00:00:00.001 Started by upstream project "autotest-per-patch" build number 127191
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "jbp-per-patch" build number 24328
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.098 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:05.520 The recommended git tool is: git
00:00:05.520 using credential 00000000-0000-0000-0000-000000000002
00:00:05.522 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:05.537 Fetching changes from the remote Git repository
00:00:05.538 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:05.551 Using shallow fetch with depth 1
00:00:05.551 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:05.551 > git --version # timeout=10
00:00:05.561 > git --version # 'git version 2.39.2'
00:00:05.561 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:05.573 Setting http proxy: proxy-dmz.intel.com:911
00:00:05.573 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/41/22241/26 # timeout=5
00:00:10.774 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:10.785 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:10.796 Checking out Revision 124d5bb683991a063807d96399433650600a89c8 (FETCH_HEAD)
00:00:10.796 > git config core.sparsecheckout # timeout=10
00:00:10.806 > git read-tree -mu HEAD # timeout=10
00:00:10.823 > git checkout -f 124d5bb683991a063807d96399433650600a89c8 # timeout=5
00:00:10.841 Commit message: "jenkins/jjb-config: Add release-build jobs to per-patch and nightly"
00:00:10.842 > git rev-list --no-walk bb4bbb76f2437bc8cff7e7e4a466bce7165cd7f0 # timeout=10
00:00:10.922 [Pipeline] Start of Pipeline
00:00:10.933 [Pipeline] library
00:00:10.934 Loading library shm_lib@master
00:00:10.934 Library shm_lib@master is cached. Copying from home.
00:00:10.950 [Pipeline] node
00:00:10.959 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:10.960 [Pipeline] {
00:00:10.972 [Pipeline] catchError
00:00:10.973 [Pipeline] {
00:00:10.986 [Pipeline] wrap
00:00:10.995 [Pipeline] {
00:00:11.001 [Pipeline] stage
00:00:11.003 [Pipeline] { (Prologue)
00:00:11.221 [Pipeline] sh
00:00:11.498 + logger -p user.info -t JENKINS-CI
00:00:11.516 [Pipeline] echo
00:00:11.518 Node: GP6
00:00:11.525 [Pipeline] sh
00:00:11.817 [Pipeline] setCustomBuildProperty
00:00:11.828 [Pipeline] echo
00:00:11.830 Cleanup processes
00:00:11.835 [Pipeline] sh
00:00:12.118 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:12.118 681556 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:12.144 [Pipeline] sh
00:00:12.427 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:12.427 ++ grep -v 'sudo pgrep'
00:00:12.427 ++ awk '{print $1}'
00:00:12.427 + sudo kill -9
00:00:12.427 + true
00:00:12.444 [Pipeline] cleanWs
00:00:12.456 [WS-CLEANUP] Deleting project workspace...
00:00:12.456 [WS-CLEANUP] Deferred wipeout is used...
00:00:12.463 [WS-CLEANUP] done
00:00:12.466 [Pipeline] setCustomBuildProperty
00:00:12.478 [Pipeline] sh
00:00:12.756 + sudo git config --global --replace-all safe.directory '*'
00:00:12.833 [Pipeline] httpRequest
00:00:12.859 [Pipeline] echo
00:00:12.860 Sorcerer 10.211.164.101 is alive
00:00:12.867 [Pipeline] httpRequest
00:00:12.872 HttpMethod: GET
00:00:12.872 URL: http://10.211.164.101/packages/jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz
00:00:12.873 Sending request to url: http://10.211.164.101/packages/jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz
00:00:12.897 Response Code: HTTP/1.1 200 OK
00:00:12.898 Success: Status code 200 is in the accepted range: 200,404
00:00:12.898 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz
00:00:30.733 [Pipeline] sh
00:00:31.016 + tar --no-same-owner -xf jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz
00:00:31.044 [Pipeline] httpRequest
00:00:31.065 [Pipeline] echo
00:00:31.066 Sorcerer 10.211.164.101 is alive
00:00:31.072 [Pipeline] httpRequest
00:00:31.076 HttpMethod: GET
00:00:31.077 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:31.077 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:31.083 Response Code: HTTP/1.1 200 OK
00:00:31.084 Success: Status code 200 is in the accepted range: 200,404
00:00:31.084 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:02:51.703 [Pipeline] sh
00:02:51.985 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:02:55.284 [Pipeline] sh
00:02:55.568 + git -C spdk log --oneline -n5
00:02:55.568 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:02:55.568 fc2398dfa raid: clear base bdev configure_cb after executing
00:02:55.568 5558f3f50 raid: complete bdev_raid_create after sb is written
00:02:55.568 d005e023b raid: fix empty slot not updated in sb after resize
00:02:55.568 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:02:55.580 [Pipeline] }
00:02:55.598 [Pipeline] // stage
00:02:55.607 [Pipeline] stage
00:02:55.608 [Pipeline] { (Prepare)
00:02:55.625 [Pipeline] writeFile
00:02:55.643 [Pipeline] sh
00:02:55.923 + logger -p user.info -t JENKINS-CI
00:02:55.936 [Pipeline] sh
00:02:56.216 + logger -p user.info -t JENKINS-CI
00:02:56.228 [Pipeline] sh
00:02:56.508 + cat autorun-spdk.conf
00:02:56.508 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:56.508 SPDK_TEST_NVMF=1
00:02:56.508 SPDK_TEST_NVME_CLI=1
00:02:56.508 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:56.508 SPDK_TEST_NVMF_NICS=e810
00:02:56.508 SPDK_TEST_VFIOUSER=1
00:02:56.508 SPDK_RUN_UBSAN=1
00:02:56.508 NET_TYPE=phy
00:02:56.515 RUN_NIGHTLY=0
00:02:56.521 [Pipeline] readFile
00:02:56.548 [Pipeline] withEnv
00:02:56.550 [Pipeline] {
00:02:56.563 [Pipeline] sh
00:02:56.851 + set -ex
00:02:56.851 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:56.851 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:56.851 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:56.851 ++ SPDK_TEST_NVMF=1
00:02:56.851 ++ SPDK_TEST_NVME_CLI=1
00:02:56.851 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:56.851 ++ SPDK_TEST_NVMF_NICS=e810
00:02:56.851 ++ SPDK_TEST_VFIOUSER=1
00:02:56.851 ++ SPDK_RUN_UBSAN=1
00:02:56.851 ++ NET_TYPE=phy
00:02:56.851 ++ RUN_NIGHTLY=0
00:02:56.851 + case $SPDK_TEST_NVMF_NICS in
00:02:56.851 + DRIVERS=ice
00:02:56.851 + [[ tcp == \r\d\m\a ]]
00:02:56.851 + [[ -n ice ]]
00:02:56.851 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:56.851 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:56.851 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:56.851 rmmod: ERROR: Module irdma is not currently loaded
00:02:56.851 rmmod: ERROR: Module i40iw is not currently loaded
00:02:56.851 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:56.851 + true
00:02:56.851 + for D in $DRIVERS
00:02:56.851 + sudo modprobe ice
00:02:56.851 + exit 0
00:02:56.861 [Pipeline] }
00:02:56.879 [Pipeline] // withEnv
00:02:56.884 [Pipeline] }
00:02:56.901 [Pipeline] // stage
00:02:56.911 [Pipeline] catchError
00:02:56.912 [Pipeline] {
00:02:56.928 [Pipeline] timeout
00:02:56.929 Timeout set to expire in 50 min
00:02:56.930 [Pipeline] {
00:02:56.944 [Pipeline] stage
00:02:56.946 [Pipeline] { (Tests)
00:02:56.960 [Pipeline] sh
00:02:57.240 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:57.240 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:57.240 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:57.240 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:57.240 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:57.240 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:57.240 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:57.240 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:57.240 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:57.240 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:57.240 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:57.240 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:57.240 + source /etc/os-release
00:02:57.240 ++ NAME='Fedora Linux'
00:02:57.240 ++ VERSION='38 (Cloud Edition)'
00:02:57.240 ++ ID=fedora
00:02:57.240 ++ VERSION_ID=38
00:02:57.240 ++ VERSION_CODENAME=
00:02:57.240 ++ PLATFORM_ID=platform:f38
00:02:57.240 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:57.240 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:57.240 ++ LOGO=fedora-logo-icon
00:02:57.240 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:57.240 ++ HOME_URL=https://fedoraproject.org/
00:02:57.240 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:57.240 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:57.240 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:57.240 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:57.240 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:57.240 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:57.240 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:57.240 ++ SUPPORT_END=2024-05-14
00:02:57.240 ++ VARIANT='Cloud Edition'
00:02:57.240 ++ VARIANT_ID=cloud
00:02:57.240 + uname -a
00:02:57.240 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:57.240 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:58.615 Hugepages
00:02:58.615 node hugesize free / total
00:02:58.615 node0 1048576kB 0 / 0
00:02:58.615 node0 2048kB 0 / 0
00:02:58.615 node1 1048576kB 0 / 0
00:02:58.615 node1 2048kB 0 / 0
00:02:58.615
00:02:58.615 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:58.615 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:02:58.615 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:02:58.615 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:02:58.615 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:02:58.615 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:02:58.615 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:02:58.615 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:02:58.616 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:02:58.616 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:58.616 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:02:58.616 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:02:58.616 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:02:58.616 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:02:58.616 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:02:58.616 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:02:58.616 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:02:58.616 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:02:58.616 + rm -f /tmp/spdk-ld-path
00:02:58.616 + source autorun-spdk.conf
00:02:58.616 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:58.616 ++ SPDK_TEST_NVMF=1
00:02:58.616 ++ SPDK_TEST_NVME_CLI=1
00:02:58.616 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:58.616 ++ SPDK_TEST_NVMF_NICS=e810
00:02:58.616 ++ SPDK_TEST_VFIOUSER=1
00:02:58.616 ++ SPDK_RUN_UBSAN=1
00:02:58.616 ++ NET_TYPE=phy
00:02:58.616 ++ RUN_NIGHTLY=0
00:02:58.616 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:58.616 + [[ -n '' ]]
00:02:58.616 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:58.616 + for M in /var/spdk/build-*-manifest.txt
00:02:58.616 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:58.616 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:58.616 + for M in /var/spdk/build-*-manifest.txt
00:02:58.616 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:58.616 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:58.616 ++ uname
00:02:58.616 + [[ Linux == \L\i\n\u\x ]]
00:02:58.616 + sudo dmesg -T
00:02:58.616 + sudo dmesg --clear
00:02:58.616 + dmesg_pid=682858
00:02:58.616 + [[ Fedora Linux == FreeBSD ]]
00:02:58.616 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:58.616 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:58.616 + sudo dmesg -Tw
00:02:58.616 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:58.616 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:58.616 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:58.616 + [[ -x /usr/src/fio-static/fio ]]
00:02:58.616 + export FIO_BIN=/usr/src/fio-static/fio
00:02:58.616 + FIO_BIN=/usr/src/fio-static/fio
00:02:58.616 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:58.616 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:58.616 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:58.616 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:58.616 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:58.616 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:58.616 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:58.616 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:58.616 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:58.616 Test configuration:
00:02:58.616 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:58.616 SPDK_TEST_NVMF=1
00:02:58.616 SPDK_TEST_NVME_CLI=1
00:02:58.616 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:58.616 SPDK_TEST_NVMF_NICS=e810
00:02:58.616 SPDK_TEST_VFIOUSER=1
00:02:58.616 SPDK_RUN_UBSAN=1
00:02:58.616 NET_TYPE=phy
00:02:58.616 RUN_NIGHTLY=0
18:53:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:58.616 18:53:50 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:58.616 18:53:50 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:58.616 18:53:50 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:58.616 18:53:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:58.616 18:53:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:58.616 18:53:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:58.616 18:53:50 -- paths/export.sh@5 -- $ export PATH
00:02:58.616 18:53:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:58.616 18:53:50 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:58.616 18:53:50 -- common/autobuild_common.sh@447 -- $ date +%s
00:02:58.616 18:53:50 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721926430.XXXXXX
00:02:58.616 18:53:50 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721926430.C1QLc6
00:02:58.616 18:53:50 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:02:58.616 18:53:50 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:02:58.616 18:53:50 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:02:58.616 18:53:50 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:58.616 18:53:50 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:58.616 18:53:50 -- common/autobuild_common.sh@463 -- $ get_config_params
00:02:58.616 18:53:50 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:02:58.616 18:53:50 -- common/autotest_common.sh@10 -- $ set +x
00:02:58.616 18:53:50 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:58.616 18:53:50 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:02:58.616 18:53:50 -- pm/common@17 -- $ local monitor
00:02:58.616 18:53:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:58.616 18:53:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:58.616 18:53:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:58.617 18:53:50 -- pm/common@21 -- $ date +%s
00:02:58.617 18:53:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:58.617 18:53:50 -- pm/common@21 -- $ date +%s
00:02:58.617 18:53:50 -- pm/common@25 -- $ sleep 1
00:02:58.617 18:53:50 -- pm/common@21 -- $ date +%s
00:02:58.617 18:53:50 -- pm/common@21 -- $ date +%s
00:02:58.617 18:53:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721926430
00:02:58.617 18:53:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721926430
00:02:58.617 18:53:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721926430
00:02:58.617 18:53:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721926430
00:02:58.617 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721926430_collect-vmstat.pm.log
00:02:58.617 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721926430_collect-cpu-load.pm.log
00:02:58.617 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721926430_collect-cpu-temp.pm.log
00:02:58.617 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721926430_collect-bmc-pm.bmc.pm.log
00:02:59.552 18:53:51 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:02:59.552 18:53:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:59.552 18:53:51 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:59.552 18:53:51 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:59.552 18:53:51 -- spdk/autobuild.sh@16 -- $ date -u
00:02:59.552 Thu Jul 25 04:53:52 PM UTC 2024
00:02:59.552 18:53:52 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:59.552 v24.09-pre-321-g704257090
00:02:59.552 18:53:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:59.552 18:53:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:59.552 18:53:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:59.552 18:53:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:59.552 18:53:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:59.552 18:53:52 -- common/autotest_common.sh@10 -- $ set +x
00:02:59.811 ************************************
00:02:59.811 START TEST ubsan
00:02:59.811 ************************************
00:02:59.811 18:53:52 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:59.811 using ubsan
00:02:59.811
00:02:59.811 real 0m0.000s
00:02:59.811 user 0m0.000s
00:02:59.811 sys 0m0.000s
00:02:59.811 18:53:52 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:59.811 18:53:52 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:59.811 ************************************
00:02:59.811 END TEST ubsan
00:02:59.811 ************************************
00:02:59.811 18:53:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:59.811 18:53:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:59.811 18:53:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:59.811 18:53:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:59.811 18:53:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:59.811 18:53:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:59.811 18:53:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:59.811 18:53:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:59.811 18:53:52 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:59.811 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:59.811 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:00.070 Using 'verbs' RDMA provider
00:03:10.611 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:20.595 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:20.596 Creating mk/config.mk...done.
00:03:20.596 Creating mk/cc.flags.mk...done.
00:03:20.596 Type 'make' to build.
00:03:20.596 18:54:12 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:03:20.596 18:54:12 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:20.596 18:54:12 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:20.596 18:54:12 -- common/autotest_common.sh@10 -- $ set +x
00:03:20.596 ************************************
00:03:20.596 START TEST make
00:03:20.596 ************************************
00:03:20.596 18:54:12 make -- common/autotest_common.sh@1125 -- $ make -j48
00:03:20.596 make[1]: Nothing to be done for 'all'.
00:03:21.979 The Meson build system
00:03:21.979 Version: 1.3.1
00:03:21.979 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:21.979 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:21.979 Build type: native build
00:03:21.979 Project name: libvfio-user
00:03:21.979 Project version: 0.0.1
00:03:21.979 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:21.979 C linker for the host machine: cc ld.bfd 2.39-16
00:03:21.979 Host machine cpu family: x86_64
00:03:21.979 Host machine cpu: x86_64
00:03:21.979 Run-time dependency threads found: YES
00:03:21.979 Library dl found: YES
00:03:21.979 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:21.979 Run-time dependency json-c found: YES 0.17
00:03:21.979 Run-time dependency cmocka found: YES 1.1.7
00:03:21.979 Program pytest-3 found: NO
00:03:21.979 Program flake8 found: NO
00:03:21.979 Program misspell-fixer found: NO
00:03:21.979 Program restructuredtext-lint found: NO
00:03:21.979 Program valgrind found: YES (/usr/bin/valgrind)
00:03:21.979 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:21.979 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:21.979 Compiler for C supports arguments -Wwrite-strings: YES
00:03:21.979 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:21.979 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:21.979 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:21.979 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:21.979 Build targets in project: 8
00:03:21.979 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:21.980 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:21.980
00:03:21.980 libvfio-user 0.0.1
00:03:21.980
00:03:21.980 User defined options
00:03:21.980 buildtype : debug
00:03:21.980 default_library: shared
00:03:21.980 libdir : /usr/local/lib
00:03:21.980
00:03:21.980 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:22.551 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:22.820 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:22.820 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:22.820 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:22.820 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:22.820 [5/37] Compiling C object samples/null.p/null.c.o
00:03:22.820 [6/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:22.820 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:22.820 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:22.820 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:22.820 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:22.820 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:22.820 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:22.820 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:22.820 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:22.820 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:22.820 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:22.820 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:22.820 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:22.820 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:23.080 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:23.080 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:23.080 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:23.080 [23/37] Compiling C object samples/server.p/server.c.o
00:03:23.080 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:23.080 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:23.080 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:23.080 [27/37] Compiling C object samples/client.p/client.c.o
00:03:23.080 [28/37] Linking target lib/libvfio-user.so.0.0.1
00:03:23.080 [29/37] Linking target samples/client
00:03:23.080 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:23.080 [31/37] Linking target test/unit_tests
00:03:23.341 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:23.341 [33/37] Linking target samples/server
00:03:23.341 [34/37] Linking target samples/gpio-pci-idio-16
00:03:23.341 [35/37] Linking target samples/null
00:03:23.341 [36/37] Linking target samples/lspci
00:03:23.341 [37/37] Linking target samples/shadow_ioeventfd_server
00:03:23.341 INFO: autodetecting backend as ninja
00:03:23.341 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:23.600 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:24.173 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:24.173 ninja: no work to do.
00:03:29.451 The Meson build system
00:03:29.451 Version: 1.3.1
00:03:29.451 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:29.451 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:29.451 Build type: native build
00:03:29.451 Program cat found: YES (/usr/bin/cat)
00:03:29.451 Project name: DPDK
00:03:29.451 Project version: 24.03.0
00:03:29.451 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:29.451 C linker for the host machine: cc ld.bfd 2.39-16
00:03:29.451 Host machine cpu family: x86_64
00:03:29.451 Host machine cpu: x86_64
00:03:29.451 Message: ## Building in Developer Mode ##
00:03:29.451 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:29.451 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:29.451 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:29.451 Program python3 found: YES (/usr/bin/python3)
00:03:29.451 Program cat found: YES (/usr/bin/cat)
00:03:29.451 Compiler for C supports arguments -march=native: YES
00:03:29.451 Checking for size of "void *" : 8
00:03:29.451 Checking for size of "void *" : 8 (cached)
00:03:29.451 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:03:29.451 Library m found: YES
00:03:29.451 Library numa found: YES
00:03:29.451 Has header "numaif.h" : YES
00:03:29.451 Library fdt found: NO
00:03:29.451 Library execinfo found: NO
00:03:29.451 Has header "execinfo.h" : YES
00:03:29.451 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:29.451 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:29.451 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:29.451 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:29.451 Run-time dependency openssl found: YES 3.0.9
00:03:29.451 Run-time dependency libpcap found: YES 1.10.4
00:03:29.451 Has header "pcap.h" with dependency libpcap: YES
00:03:29.451 Compiler for C supports arguments -Wcast-qual: YES
00:03:29.451 Compiler for C supports arguments -Wdeprecated: YES
00:03:29.451 Compiler for C supports arguments -Wformat: YES
00:03:29.451 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:29.451 Compiler for C supports arguments -Wformat-security: NO
00:03:29.451 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:29.451 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:29.451 Compiler for C supports arguments -Wnested-externs: YES
00:03:29.451 Compiler for C supports arguments -Wold-style-definition: YES
00:03:29.451 Compiler for C supports arguments -Wpointer-arith: YES
00:03:29.451 Compiler for C supports arguments -Wsign-compare: YES
00:03:29.451 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:29.451 Compiler for C supports arguments -Wundef: YES
00:03:29.451 Compiler for C supports arguments -Wwrite-strings: YES
00:03:29.451 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:29.451 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:29.451 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:29.451 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:29.451 Program objdump found: YES (/usr/bin/objdump)
00:03:29.451 Compiler for C supports arguments -mavx512f: YES
00:03:29.451 Checking if "AVX512 checking" compiles: YES
00:03:29.451 Fetching value of define "__SSE4_2__" : 1
00:03:29.451 Fetching value of define "__AES__" : 1
00:03:29.451 Fetching value of define "__AVX__" : 1
00:03:29.451 Fetching value of define "__AVX2__" : (undefined)
00:03:29.451 Fetching value of define "__AVX512BW__" : (undefined)
00:03:29.451 Fetching value of define "__AVX512CD__" : (undefined)
00:03:29.451 Fetching value of define "__AVX512DQ__" : (undefined)
00:03:29.451 Fetching value of define "__AVX512F__" : (undefined)
00:03:29.451 Fetching value of define "__AVX512VL__" : (undefined)
00:03:29.451 Fetching value of define "__PCLMUL__" : 1
00:03:29.451 Fetching value of define "__RDRND__" : 1
00:03:29.451 Fetching value of define "__RDSEED__" : (undefined)
00:03:29.451 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:29.451 Fetching value of define "__znver1__" : (undefined)
00:03:29.451 Fetching value of define "__znver2__" : (undefined)
00:03:29.451 Fetching value of define "__znver3__" : (undefined)
00:03:29.451 Fetching value of define "__znver4__" : (undefined)
00:03:29.451 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:29.451 Message: lib/log: Defining dependency "log"
00:03:29.451 Message: lib/kvargs: Defining dependency "kvargs"
00:03:29.451 Message: lib/telemetry: Defining dependency "telemetry"
00:03:29.451 Checking for function "getentropy" : NO
00:03:29.451 Message: lib/eal: Defining dependency "eal"
00:03:29.451 Message: lib/ring: Defining dependency "ring"
00:03:29.451 Message: lib/rcu: Defining dependency "rcu"
00:03:29.451 Message: lib/mempool: Defining dependency "mempool"
00:03:29.451 Message: lib/mbuf: Defining dependency "mbuf"
00:03:29.451 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:29.451 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:29.451 Compiler for C supports arguments -mpclmul: YES
00:03:29.451 Compiler for C supports arguments -maes: YES
00:03:29.451 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:29.451 Compiler for C supports arguments -mavx512bw: YES
00:03:29.451 Compiler for C supports arguments -mavx512dq: YES
00:03:29.451 Compiler for C supports arguments -mavx512vl: YES
00:03:29.451 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:29.451 Compiler for C supports arguments -mavx2: YES
00:03:29.451 Compiler for C supports arguments -mavx: YES
00:03:29.451 Message: lib/net: Defining dependency "net"
00:03:29.451 Message: lib/meter: Defining dependency "meter"
00:03:29.451 Message: lib/ethdev: Defining dependency "ethdev"
00:03:29.451 Message: lib/pci: Defining dependency "pci"
00:03:29.451 Message: lib/cmdline: Defining dependency "cmdline"
00:03:29.451 Message: lib/hash: Defining dependency "hash"
00:03:29.451 Message: lib/timer: Defining dependency "timer"
00:03:29.451 Message: lib/compressdev: Defining dependency "compressdev"
00:03:29.451 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:29.451 Message: lib/dmadev: Defining dependency "dmadev"
00:03:29.451 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:29.451 Message: lib/power: Defining dependency "power"
00:03:29.451 Message: lib/reorder: Defining dependency "reorder"
00:03:29.451 Message: lib/security: Defining dependency "security"
00:03:29.451 Has header "linux/userfaultfd.h" : YES
00:03:29.451 Has header "linux/vduse.h" : YES
00:03:29.451 Message: lib/vhost: Defining dependency "vhost"
00:03:29.451 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:29.451 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:29.451 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:29.451 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:29.451 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:29.451 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:29.451 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:29.451 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:29.451 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:29.451 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:29.451 Program doxygen found: YES (/usr/bin/doxygen)
00:03:29.451 Configuring doxy-api-html.conf using configuration
00:03:29.451 Configuring doxy-api-man.conf using configuration
Program mandb found: YES (/usr/bin/mandb) 00:03:29.451 Program sphinx-build found: NO 00:03:29.451 Configuring rte_build_config.h using configuration 00:03:29.451 Message: 00:03:29.451 ================= 00:03:29.451 Applications Enabled 00:03:29.451 ================= 00:03:29.451 00:03:29.451 apps: 00:03:29.451 00:03:29.451 00:03:29.451 Message: 00:03:29.451 ================= 00:03:29.451 Libraries Enabled 00:03:29.451 ================= 00:03:29.451 00:03:29.451 libs: 00:03:29.451 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:29.451 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:29.451 cryptodev, dmadev, power, reorder, security, vhost, 00:03:29.451 00:03:29.451 Message: 00:03:29.451 =============== 00:03:29.451 Drivers Enabled 00:03:29.451 =============== 00:03:29.451 00:03:29.451 common: 00:03:29.451 00:03:29.451 bus: 00:03:29.451 pci, vdev, 00:03:29.451 mempool: 00:03:29.451 ring, 00:03:29.451 dma: 00:03:29.451 00:03:29.451 net: 00:03:29.451 00:03:29.451 crypto: 00:03:29.451 00:03:29.451 compress: 00:03:29.451 00:03:29.451 vdpa: 00:03:29.451 00:03:29.451 00:03:29.451 Message: 00:03:29.451 ================= 00:03:29.451 Content Skipped 00:03:29.451 ================= 00:03:29.451 00:03:29.451 apps: 00:03:29.451 dumpcap: explicitly disabled via build config 00:03:29.451 graph: explicitly disabled via build config 00:03:29.451 pdump: explicitly disabled via build config 00:03:29.451 proc-info: explicitly disabled via build config 00:03:29.451 test-acl: explicitly disabled via build config 00:03:29.451 test-bbdev: explicitly disabled via build config 00:03:29.451 test-cmdline: explicitly disabled via build config 00:03:29.451 test-compress-perf: explicitly disabled via build config 00:03:29.451 test-crypto-perf: explicitly disabled via build config 00:03:29.451 test-dma-perf: explicitly disabled via build config 00:03:29.451 test-eventdev: explicitly disabled via build config 00:03:29.451 test-fib: explicitly disabled via build 
config 00:03:29.451 test-flow-perf: explicitly disabled via build config 00:03:29.451 test-gpudev: explicitly disabled via build config 00:03:29.451 test-mldev: explicitly disabled via build config 00:03:29.451 test-pipeline: explicitly disabled via build config 00:03:29.451 test-pmd: explicitly disabled via build config 00:03:29.451 test-regex: explicitly disabled via build config 00:03:29.451 test-sad: explicitly disabled via build config 00:03:29.451 test-security-perf: explicitly disabled via build config 00:03:29.451 00:03:29.451 libs: 00:03:29.451 argparse: explicitly disabled via build config 00:03:29.451 metrics: explicitly disabled via build config 00:03:29.451 acl: explicitly disabled via build config 00:03:29.451 bbdev: explicitly disabled via build config 00:03:29.451 bitratestats: explicitly disabled via build config 00:03:29.451 bpf: explicitly disabled via build config 00:03:29.451 cfgfile: explicitly disabled via build config 00:03:29.451 distributor: explicitly disabled via build config 00:03:29.451 efd: explicitly disabled via build config 00:03:29.451 eventdev: explicitly disabled via build config 00:03:29.451 dispatcher: explicitly disabled via build config 00:03:29.451 gpudev: explicitly disabled via build config 00:03:29.451 gro: explicitly disabled via build config 00:03:29.451 gso: explicitly disabled via build config 00:03:29.451 ip_frag: explicitly disabled via build config 00:03:29.451 jobstats: explicitly disabled via build config 00:03:29.451 latencystats: explicitly disabled via build config 00:03:29.451 lpm: explicitly disabled via build config 00:03:29.451 member: explicitly disabled via build config 00:03:29.451 pcapng: explicitly disabled via build config 00:03:29.451 rawdev: explicitly disabled via build config 00:03:29.451 regexdev: explicitly disabled via build config 00:03:29.451 mldev: explicitly disabled via build config 00:03:29.451 rib: explicitly disabled via build config 00:03:29.451 sched: explicitly disabled via build 
config 00:03:29.451 stack: explicitly disabled via build config 00:03:29.451 ipsec: explicitly disabled via build config 00:03:29.451 pdcp: explicitly disabled via build config 00:03:29.451 fib: explicitly disabled via build config 00:03:29.451 port: explicitly disabled via build config 00:03:29.451 pdump: explicitly disabled via build config 00:03:29.451 table: explicitly disabled via build config 00:03:29.451 pipeline: explicitly disabled via build config 00:03:29.451 graph: explicitly disabled via build config 00:03:29.451 node: explicitly disabled via build config 00:03:29.451 00:03:29.451 drivers: 00:03:29.451 common/cpt: not in enabled drivers build config 00:03:29.451 common/dpaax: not in enabled drivers build config 00:03:29.451 common/iavf: not in enabled drivers build config 00:03:29.451 common/idpf: not in enabled drivers build config 00:03:29.451 common/ionic: not in enabled drivers build config 00:03:29.451 common/mvep: not in enabled drivers build config 00:03:29.451 common/octeontx: not in enabled drivers build config 00:03:29.451 bus/auxiliary: not in enabled drivers build config 00:03:29.451 bus/cdx: not in enabled drivers build config 00:03:29.451 bus/dpaa: not in enabled drivers build config 00:03:29.451 bus/fslmc: not in enabled drivers build config 00:03:29.451 bus/ifpga: not in enabled drivers build config 00:03:29.451 bus/platform: not in enabled drivers build config 00:03:29.451 bus/uacce: not in enabled drivers build config 00:03:29.451 bus/vmbus: not in enabled drivers build config 00:03:29.451 common/cnxk: not in enabled drivers build config 00:03:29.451 common/mlx5: not in enabled drivers build config 00:03:29.451 common/nfp: not in enabled drivers build config 00:03:29.451 common/nitrox: not in enabled drivers build config 00:03:29.451 common/qat: not in enabled drivers build config 00:03:29.451 common/sfc_efx: not in enabled drivers build config 00:03:29.452 mempool/bucket: not in enabled drivers build config 00:03:29.452 mempool/cnxk: 
not in enabled drivers build config 00:03:29.452 mempool/dpaa: not in enabled drivers build config 00:03:29.452 mempool/dpaa2: not in enabled drivers build config 00:03:29.452 mempool/octeontx: not in enabled drivers build config 00:03:29.452 mempool/stack: not in enabled drivers build config 00:03:29.452 dma/cnxk: not in enabled drivers build config 00:03:29.452 dma/dpaa: not in enabled drivers build config 00:03:29.452 dma/dpaa2: not in enabled drivers build config 00:03:29.452 dma/hisilicon: not in enabled drivers build config 00:03:29.452 dma/idxd: not in enabled drivers build config 00:03:29.452 dma/ioat: not in enabled drivers build config 00:03:29.452 dma/skeleton: not in enabled drivers build config 00:03:29.452 net/af_packet: not in enabled drivers build config 00:03:29.452 net/af_xdp: not in enabled drivers build config 00:03:29.452 net/ark: not in enabled drivers build config 00:03:29.452 net/atlantic: not in enabled drivers build config 00:03:29.452 net/avp: not in enabled drivers build config 00:03:29.452 net/axgbe: not in enabled drivers build config 00:03:29.452 net/bnx2x: not in enabled drivers build config 00:03:29.452 net/bnxt: not in enabled drivers build config 00:03:29.452 net/bonding: not in enabled drivers build config 00:03:29.452 net/cnxk: not in enabled drivers build config 00:03:29.452 net/cpfl: not in enabled drivers build config 00:03:29.452 net/cxgbe: not in enabled drivers build config 00:03:29.452 net/dpaa: not in enabled drivers build config 00:03:29.452 net/dpaa2: not in enabled drivers build config 00:03:29.452 net/e1000: not in enabled drivers build config 00:03:29.452 net/ena: not in enabled drivers build config 00:03:29.452 net/enetc: not in enabled drivers build config 00:03:29.452 net/enetfec: not in enabled drivers build config 00:03:29.452 net/enic: not in enabled drivers build config 00:03:29.452 net/failsafe: not in enabled drivers build config 00:03:29.452 net/fm10k: not in enabled drivers build config 00:03:29.452 
net/gve: not in enabled drivers build config 00:03:29.452 net/hinic: not in enabled drivers build config 00:03:29.452 net/hns3: not in enabled drivers build config 00:03:29.452 net/i40e: not in enabled drivers build config 00:03:29.452 net/iavf: not in enabled drivers build config 00:03:29.452 net/ice: not in enabled drivers build config 00:03:29.452 net/idpf: not in enabled drivers build config 00:03:29.452 net/igc: not in enabled drivers build config 00:03:29.452 net/ionic: not in enabled drivers build config 00:03:29.452 net/ipn3ke: not in enabled drivers build config 00:03:29.452 net/ixgbe: not in enabled drivers build config 00:03:29.452 net/mana: not in enabled drivers build config 00:03:29.452 net/memif: not in enabled drivers build config 00:03:29.452 net/mlx4: not in enabled drivers build config 00:03:29.452 net/mlx5: not in enabled drivers build config 00:03:29.452 net/mvneta: not in enabled drivers build config 00:03:29.452 net/mvpp2: not in enabled drivers build config 00:03:29.452 net/netvsc: not in enabled drivers build config 00:03:29.452 net/nfb: not in enabled drivers build config 00:03:29.452 net/nfp: not in enabled drivers build config 00:03:29.452 net/ngbe: not in enabled drivers build config 00:03:29.452 net/null: not in enabled drivers build config 00:03:29.452 net/octeontx: not in enabled drivers build config 00:03:29.452 net/octeon_ep: not in enabled drivers build config 00:03:29.452 net/pcap: not in enabled drivers build config 00:03:29.452 net/pfe: not in enabled drivers build config 00:03:29.452 net/qede: not in enabled drivers build config 00:03:29.452 net/ring: not in enabled drivers build config 00:03:29.452 net/sfc: not in enabled drivers build config 00:03:29.452 net/softnic: not in enabled drivers build config 00:03:29.452 net/tap: not in enabled drivers build config 00:03:29.452 net/thunderx: not in enabled drivers build config 00:03:29.452 net/txgbe: not in enabled drivers build config 00:03:29.452 net/vdev_netvsc: not in enabled 
drivers build config 00:03:29.452 net/vhost: not in enabled drivers build config 00:03:29.452 net/virtio: not in enabled drivers build config 00:03:29.452 net/vmxnet3: not in enabled drivers build config 00:03:29.452 raw/*: missing internal dependency, "rawdev" 00:03:29.452 crypto/armv8: not in enabled drivers build config 00:03:29.452 crypto/bcmfs: not in enabled drivers build config 00:03:29.452 crypto/caam_jr: not in enabled drivers build config 00:03:29.452 crypto/ccp: not in enabled drivers build config 00:03:29.452 crypto/cnxk: not in enabled drivers build config 00:03:29.452 crypto/dpaa_sec: not in enabled drivers build config 00:03:29.452 crypto/dpaa2_sec: not in enabled drivers build config 00:03:29.452 crypto/ipsec_mb: not in enabled drivers build config 00:03:29.452 crypto/mlx5: not in enabled drivers build config 00:03:29.452 crypto/mvsam: not in enabled drivers build config 00:03:29.452 crypto/nitrox: not in enabled drivers build config 00:03:29.452 crypto/null: not in enabled drivers build config 00:03:29.452 crypto/octeontx: not in enabled drivers build config 00:03:29.452 crypto/openssl: not in enabled drivers build config 00:03:29.452 crypto/scheduler: not in enabled drivers build config 00:03:29.452 crypto/uadk: not in enabled drivers build config 00:03:29.452 crypto/virtio: not in enabled drivers build config 00:03:29.452 compress/isal: not in enabled drivers build config 00:03:29.452 compress/mlx5: not in enabled drivers build config 00:03:29.452 compress/nitrox: not in enabled drivers build config 00:03:29.452 compress/octeontx: not in enabled drivers build config 00:03:29.452 compress/zlib: not in enabled drivers build config 00:03:29.452 regex/*: missing internal dependency, "regexdev" 00:03:29.452 ml/*: missing internal dependency, "mldev" 00:03:29.452 vdpa/ifc: not in enabled drivers build config 00:03:29.452 vdpa/mlx5: not in enabled drivers build config 00:03:29.452 vdpa/nfp: not in enabled drivers build config 00:03:29.452 vdpa/sfc: not 
in enabled drivers build config 00:03:29.452 event/*: missing internal dependency, "eventdev" 00:03:29.452 baseband/*: missing internal dependency, "bbdev" 00:03:29.452 gpu/*: missing internal dependency, "gpudev" 00:03:29.452 00:03:29.452 00:03:29.452 Build targets in project: 85 00:03:29.452 00:03:29.452 DPDK 24.03.0 00:03:29.452 00:03:29.452 User defined options 00:03:29.452 buildtype : debug 00:03:29.452 default_library : shared 00:03:29.452 libdir : lib 00:03:29.452 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:29.452 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:29.452 c_link_args : 00:03:29.452 cpu_instruction_set: native 00:03:29.452 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:29.452 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:29.452 enable_docs : false 00:03:29.452 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:29.452 enable_kmods : false 00:03:29.452 max_lcores : 128 00:03:29.452 tests : false 00:03:29.452 00:03:29.452 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:29.452 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:29.452 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:29.452 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:29.452 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:29.452 [4/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:29.714 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:29.714 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:29.714 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:29.714 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:29.714 [9/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:29.714 [10/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:29.714 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:29.714 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:29.714 [13/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:29.714 [14/268] Linking static target lib/librte_kvargs.a 00:03:29.714 [15/268] Linking static target lib/librte_log.a 00:03:29.714 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:30.288 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.288 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:30.288 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:30.288 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:30.288 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:30.553 [22/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:30.553 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:30.553 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:30.553 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:30.553 [26/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:30.553 
[27/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:30.553 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:30.553 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:30.553 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:30.553 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:30.553 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:30.553 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:30.553 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:30.553 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:30.553 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:30.553 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:30.553 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:30.553 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:30.553 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:30.553 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:30.553 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:30.553 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:30.553 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:30.553 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:30.553 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:30.553 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:30.553 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:30.553 [49/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:30.553 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:30.553 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:30.553 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:30.553 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:30.553 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:30.553 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:30.553 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:30.835 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:30.835 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:30.835 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:30.835 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:30.835 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:30.835 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:30.835 [63/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:30.835 [64/268] Linking static target lib/librte_telemetry.a 00:03:30.835 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.125 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:31.125 [67/268] Linking target lib/librte_log.so.24.1 00:03:31.125 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:31.125 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:31.125 [70/268] Linking static target lib/librte_pci.a 00:03:31.125 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:31.391 [72/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:31.391 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:31.391 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:31.392 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:31.392 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:31.392 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:31.392 [78/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:31.392 [79/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:31.392 [80/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:31.392 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:31.392 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:31.392 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:31.392 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:31.392 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:31.392 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:31.392 [87/268] Linking static target lib/librte_ring.a 00:03:31.392 [88/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:31.392 [89/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:31.392 [90/268] Linking target lib/librte_kvargs.so.24.1 00:03:31.392 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:31.392 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:31.656 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:31.656 [94/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:31.656 [95/268] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:31.656 [96/268] Linking static target lib/librte_meter.a 00:03:31.656 [97/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:31.656 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:31.656 [99/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:31.656 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:31.656 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:31.656 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:31.656 [103/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.656 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:31.656 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:31.656 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:31.656 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:31.656 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:31.656 [109/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:31.656 [110/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:31.656 [111/268] Linking static target lib/librte_rcu.a 00:03:31.656 [112/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:31.656 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:31.656 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:31.656 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:31.656 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:31.656 [117/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:31.656 [118/268] Linking static target 
lib/librte_mempool.a 00:03:31.656 [119/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:31.914 [120/268] Linking static target lib/librte_eal.a 00:03:31.914 [121/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:31.914 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:31.914 [123/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.914 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:31.914 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:31.914 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:31.914 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:31.914 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:31.914 [129/268] Linking target lib/librte_telemetry.so.24.1 00:03:31.914 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:31.914 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:31.914 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:31.914 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:32.176 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:32.176 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:32.176 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.176 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:32.176 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.176 [139/268] Linking static target lib/librte_net.a 00:03:32.176 [140/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:32.176 [141/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:32.438 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:32.438 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:32.438 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:32.438 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:32.438 [146/268] Linking static target lib/librte_cmdline.a 00:03:32.438 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.438 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:32.438 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:32.438 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:32.697 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:32.697 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:32.697 [153/268] Linking static target lib/librte_timer.a 00:03:32.697 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:32.697 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:32.697 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.697 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:32.697 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:32.697 [159/268] Linking static target lib/librte_dmadev.a 00:03:32.697 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:32.697 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:32.697 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:32.697 [163/268] 
Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:32.697 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:32.955 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:32.955 [166/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:32.955 [167/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.955 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:32.955 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:32.955 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:32.955 [171/268] Linking static target lib/librte_power.a 00:03:32.955 [172/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.955 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:32.955 [174/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:32.955 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:32.955 [176/268] Linking static target lib/librte_hash.a 00:03:32.955 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:32.955 [178/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:32.955 [179/268] Linking static target lib/librte_compressdev.a 00:03:33.213 [180/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:33.213 [181/268] Linking static target lib/librte_mbuf.a 00:03:33.213 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:33.213 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:33.213 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:33.213 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:33.213 
[186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:33.213 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:33.213 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:33.213 [189/268] Linking static target lib/librte_reorder.a 00:03:33.213 [190/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.213 [191/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:33.213 [192/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.213 [193/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:33.213 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:33.472 [195/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:33.472 [196/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:33.472 [197/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:33.472 [198/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:33.472 [199/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.472 [200/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:33.472 [201/268] Linking static target lib/librte_security.a 00:03:33.472 [202/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.472 [203/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.472 [204/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:33.472 [205/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:33.472 [206/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:33.472 [207/268] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:33.472 [208/268] Linking static target drivers/librte_bus_vdev.a 00:03:33.472 [209/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.472 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:33.472 [211/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.472 [212/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:33.729 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:33.729 [214/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:33.729 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:33.729 [216/268] Linking static target drivers/librte_mempool_ring.a 00:03:33.729 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:33.729 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:33.729 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:33.729 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.729 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:33.729 [222/268] Linking static target lib/librte_ethdev.a 00:03:33.729 [223/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:33.729 [224/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.729 [225/268] Linking static target lib/librte_cryptodev.a 00:03:33.987 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.921 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:36.293 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:38.190 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.190 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.190 [231/268] Linking target lib/librte_eal.so.24.1 00:03:38.190 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:38.190 [233/268] Linking target lib/librte_ring.so.24.1 00:03:38.190 [234/268] Linking target lib/librte_pci.so.24.1 00:03:38.190 [235/268] Linking target lib/librte_timer.so.24.1 00:03:38.190 [236/268] Linking target lib/librte_meter.so.24.1 00:03:38.190 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:38.190 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:38.449 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:38.449 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:38.449 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:38.449 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:38.449 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:38.449 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:38.449 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:38.449 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:38.449 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:38.449 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:38.449 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:38.449 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:38.707 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 
00:03:38.707 [252/268] Linking target lib/librte_net.so.24.1 00:03:38.707 [253/268] Linking target lib/librte_reorder.so.24.1 00:03:38.707 [254/268] Linking target lib/librte_compressdev.so.24.1 00:03:38.707 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:38.707 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:38.707 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:38.966 [258/268] Linking target lib/librte_security.so.24.1 00:03:38.966 [259/268] Linking target lib/librte_hash.so.24.1 00:03:38.966 [260/268] Linking target lib/librte_cmdline.so.24.1 00:03:38.966 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:38.966 [262/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:38.966 [263/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:38.966 [264/268] Linking target lib/librte_power.so.24.1 00:03:41.498 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:41.498 [266/268] Linking static target lib/librte_vhost.a 00:03:42.433 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.433 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:42.433 INFO: autodetecting backend as ninja 00:03:42.433 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:43.366 CC lib/log/log.o 00:03:43.366 CC lib/log/log_flags.o 00:03:43.366 CC lib/ut/ut.o 00:03:43.366 CC lib/log/log_deprecated.o 00:03:43.366 CC lib/ut_mock/mock.o 00:03:43.623 LIB libspdk_ut.a 00:03:43.623 LIB libspdk_log.a 00:03:43.623 LIB libspdk_ut_mock.a 00:03:43.623 SO libspdk_ut.so.2.0 00:03:43.623 SO libspdk_log.so.7.0 00:03:43.623 SO libspdk_ut_mock.so.6.0 00:03:43.623 SYMLINK libspdk_ut.so 00:03:43.623 SYMLINK libspdk_log.so 00:03:43.623 SYMLINK 
libspdk_ut_mock.so 00:03:43.881 CXX lib/trace_parser/trace.o 00:03:43.881 CC lib/dma/dma.o 00:03:43.881 CC lib/ioat/ioat.o 00:03:43.881 CC lib/util/base64.o 00:03:43.881 CC lib/util/bit_array.o 00:03:43.881 CC lib/util/cpuset.o 00:03:43.881 CC lib/util/crc16.o 00:03:43.881 CC lib/util/crc32.o 00:03:43.881 CC lib/util/crc32c.o 00:03:43.881 CC lib/util/crc32_ieee.o 00:03:43.881 CC lib/util/crc64.o 00:03:43.881 CC lib/util/dif.o 00:03:43.881 CC lib/util/fd.o 00:03:43.881 CC lib/util/fd_group.o 00:03:43.881 CC lib/util/file.o 00:03:43.881 CC lib/util/hexlify.o 00:03:43.881 CC lib/util/iov.o 00:03:43.881 CC lib/util/math.o 00:03:43.881 CC lib/util/net.o 00:03:43.881 CC lib/util/pipe.o 00:03:43.881 CC lib/util/strerror_tls.o 00:03:43.881 CC lib/util/string.o 00:03:43.881 CC lib/util/uuid.o 00:03:43.881 CC lib/util/xor.o 00:03:43.881 CC lib/util/zipf.o 00:03:43.881 CC lib/vfio_user/host/vfio_user_pci.o 00:03:43.881 CC lib/vfio_user/host/vfio_user.o 00:03:44.140 LIB libspdk_dma.a 00:03:44.140 SO libspdk_dma.so.4.0 00:03:44.140 SYMLINK libspdk_dma.so 00:03:44.140 LIB libspdk_ioat.a 00:03:44.140 SO libspdk_ioat.so.7.0 00:03:44.140 SYMLINK libspdk_ioat.so 00:03:44.140 LIB libspdk_vfio_user.a 00:03:44.398 SO libspdk_vfio_user.so.5.0 00:03:44.398 SYMLINK libspdk_vfio_user.so 00:03:44.398 LIB libspdk_util.a 00:03:44.398 SO libspdk_util.so.10.0 00:03:44.656 SYMLINK libspdk_util.so 00:03:44.656 CC lib/conf/conf.o 00:03:44.656 CC lib/rdma_provider/common.o 00:03:44.656 CC lib/json/json_parse.o 00:03:44.656 CC lib/idxd/idxd.o 00:03:44.656 CC lib/env_dpdk/env.o 00:03:44.656 CC lib/rdma_utils/rdma_utils.o 00:03:44.656 CC lib/json/json_util.o 00:03:44.656 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:44.656 CC lib/idxd/idxd_user.o 00:03:44.656 CC lib/env_dpdk/memory.o 00:03:44.656 CC lib/vmd/vmd.o 00:03:44.656 CC lib/json/json_write.o 00:03:44.656 CC lib/idxd/idxd_kernel.o 00:03:44.656 CC lib/env_dpdk/pci.o 00:03:44.656 CC lib/vmd/led.o 00:03:44.656 CC lib/env_dpdk/init.o 
00:03:44.656 CC lib/env_dpdk/threads.o 00:03:44.656 CC lib/env_dpdk/pci_ioat.o 00:03:44.656 CC lib/env_dpdk/pci_virtio.o 00:03:44.656 CC lib/env_dpdk/pci_vmd.o 00:03:44.656 CC lib/env_dpdk/pci_idxd.o 00:03:44.656 CC lib/env_dpdk/pci_event.o 00:03:44.656 CC lib/env_dpdk/sigbus_handler.o 00:03:44.656 CC lib/env_dpdk/pci_dpdk.o 00:03:44.656 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:44.656 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:44.915 LIB libspdk_trace_parser.a 00:03:44.915 SO libspdk_trace_parser.so.5.0 00:03:44.915 SYMLINK libspdk_trace_parser.so 00:03:45.173 LIB libspdk_rdma_provider.a 00:03:45.173 LIB libspdk_rdma_utils.a 00:03:45.173 LIB libspdk_json.a 00:03:45.173 SO libspdk_rdma_provider.so.6.0 00:03:45.173 SO libspdk_rdma_utils.so.1.0 00:03:45.173 LIB libspdk_conf.a 00:03:45.173 SO libspdk_json.so.6.0 00:03:45.173 SO libspdk_conf.so.6.0 00:03:45.173 SYMLINK libspdk_rdma_utils.so 00:03:45.173 SYMLINK libspdk_rdma_provider.so 00:03:45.173 SYMLINK libspdk_json.so 00:03:45.173 SYMLINK libspdk_conf.so 00:03:45.431 CC lib/jsonrpc/jsonrpc_server.o 00:03:45.431 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:45.431 CC lib/jsonrpc/jsonrpc_client.o 00:03:45.431 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:45.431 LIB libspdk_idxd.a 00:03:45.431 SO libspdk_idxd.so.12.0 00:03:45.431 LIB libspdk_vmd.a 00:03:45.431 SYMLINK libspdk_idxd.so 00:03:45.431 SO libspdk_vmd.so.6.0 00:03:45.431 SYMLINK libspdk_vmd.so 00:03:45.689 LIB libspdk_jsonrpc.a 00:03:45.689 SO libspdk_jsonrpc.so.6.0 00:03:45.689 SYMLINK libspdk_jsonrpc.so 00:03:45.947 CC lib/rpc/rpc.o 00:03:46.205 LIB libspdk_rpc.a 00:03:46.205 SO libspdk_rpc.so.6.0 00:03:46.205 SYMLINK libspdk_rpc.so 00:03:46.205 CC lib/trace/trace.o 00:03:46.205 CC lib/trace/trace_flags.o 00:03:46.205 CC lib/trace/trace_rpc.o 00:03:46.205 CC lib/keyring/keyring.o 00:03:46.205 CC lib/keyring/keyring_rpc.o 00:03:46.205 CC lib/notify/notify.o 00:03:46.205 CC lib/notify/notify_rpc.o 00:03:46.463 LIB libspdk_notify.a 00:03:46.463 SO libspdk_notify.so.6.0 
00:03:46.463 LIB libspdk_keyring.a 00:03:46.463 SYMLINK libspdk_notify.so 00:03:46.463 LIB libspdk_trace.a 00:03:46.463 SO libspdk_keyring.so.1.0 00:03:46.721 SO libspdk_trace.so.10.0 00:03:46.721 SYMLINK libspdk_keyring.so 00:03:46.721 SYMLINK libspdk_trace.so 00:03:46.721 LIB libspdk_env_dpdk.a 00:03:46.721 CC lib/sock/sock.o 00:03:46.721 CC lib/sock/sock_rpc.o 00:03:46.721 CC lib/thread/thread.o 00:03:46.721 CC lib/thread/iobuf.o 00:03:46.979 SO libspdk_env_dpdk.so.15.0 00:03:46.980 SYMLINK libspdk_env_dpdk.so 00:03:47.238 LIB libspdk_sock.a 00:03:47.238 SO libspdk_sock.so.10.0 00:03:47.238 SYMLINK libspdk_sock.so 00:03:47.496 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:47.496 CC lib/nvme/nvme_ctrlr.o 00:03:47.496 CC lib/nvme/nvme_fabric.o 00:03:47.496 CC lib/nvme/nvme_ns_cmd.o 00:03:47.496 CC lib/nvme/nvme_ns.o 00:03:47.496 CC lib/nvme/nvme_pcie_common.o 00:03:47.496 CC lib/nvme/nvme_pcie.o 00:03:47.496 CC lib/nvme/nvme_qpair.o 00:03:47.496 CC lib/nvme/nvme.o 00:03:47.496 CC lib/nvme/nvme_quirks.o 00:03:47.496 CC lib/nvme/nvme_transport.o 00:03:47.496 CC lib/nvme/nvme_discovery.o 00:03:47.496 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:47.496 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:47.496 CC lib/nvme/nvme_tcp.o 00:03:47.496 CC lib/nvme/nvme_opal.o 00:03:47.496 CC lib/nvme/nvme_io_msg.o 00:03:47.496 CC lib/nvme/nvme_poll_group.o 00:03:47.496 CC lib/nvme/nvme_zns.o 00:03:47.496 CC lib/nvme/nvme_stubs.o 00:03:47.496 CC lib/nvme/nvme_auth.o 00:03:47.496 CC lib/nvme/nvme_cuse.o 00:03:47.496 CC lib/nvme/nvme_rdma.o 00:03:47.496 CC lib/nvme/nvme_vfio_user.o 00:03:48.428 LIB libspdk_thread.a 00:03:48.428 SO libspdk_thread.so.10.1 00:03:48.428 SYMLINK libspdk_thread.so 00:03:48.697 CC lib/accel/accel.o 00:03:48.697 CC lib/init/json_config.o 00:03:48.697 CC lib/blob/blobstore.o 00:03:48.697 CC lib/virtio/virtio.o 00:03:48.697 CC lib/vfu_tgt/tgt_endpoint.o 00:03:48.697 CC lib/accel/accel_rpc.o 00:03:48.697 CC lib/init/subsystem.o 00:03:48.697 CC lib/blob/request.o 00:03:48.697 CC 
lib/vfu_tgt/tgt_rpc.o 00:03:48.697 CC lib/accel/accel_sw.o 00:03:48.697 CC lib/virtio/virtio_vhost_user.o 00:03:48.697 CC lib/init/subsystem_rpc.o 00:03:48.697 CC lib/blob/zeroes.o 00:03:48.697 CC lib/virtio/virtio_vfio_user.o 00:03:48.697 CC lib/init/rpc.o 00:03:48.697 CC lib/blob/blob_bs_dev.o 00:03:48.697 CC lib/virtio/virtio_pci.o 00:03:48.999 LIB libspdk_init.a 00:03:48.999 SO libspdk_init.so.5.0 00:03:48.999 LIB libspdk_virtio.a 00:03:48.999 LIB libspdk_vfu_tgt.a 00:03:48.999 SYMLINK libspdk_init.so 00:03:48.999 SO libspdk_vfu_tgt.so.3.0 00:03:48.999 SO libspdk_virtio.so.7.0 00:03:49.257 SYMLINK libspdk_vfu_tgt.so 00:03:49.257 SYMLINK libspdk_virtio.so 00:03:49.257 CC lib/event/app.o 00:03:49.257 CC lib/event/reactor.o 00:03:49.257 CC lib/event/log_rpc.o 00:03:49.257 CC lib/event/app_rpc.o 00:03:49.257 CC lib/event/scheduler_static.o 00:03:49.822 LIB libspdk_event.a 00:03:49.822 SO libspdk_event.so.14.0 00:03:49.822 LIB libspdk_accel.a 00:03:49.822 SYMLINK libspdk_event.so 00:03:49.822 SO libspdk_accel.so.16.0 00:03:49.822 SYMLINK libspdk_accel.so 00:03:49.822 LIB libspdk_nvme.a 00:03:50.080 SO libspdk_nvme.so.13.1 00:03:50.080 CC lib/bdev/bdev.o 00:03:50.080 CC lib/bdev/bdev_rpc.o 00:03:50.080 CC lib/bdev/bdev_zone.o 00:03:50.080 CC lib/bdev/part.o 00:03:50.080 CC lib/bdev/scsi_nvme.o 00:03:50.340 SYMLINK libspdk_nvme.so 00:03:51.714 LIB libspdk_blob.a 00:03:51.714 SO libspdk_blob.so.11.0 00:03:51.714 SYMLINK libspdk_blob.so 00:03:51.987 CC lib/lvol/lvol.o 00:03:51.987 CC lib/blobfs/blobfs.o 00:03:51.987 CC lib/blobfs/tree.o 00:03:52.553 LIB libspdk_bdev.a 00:03:52.553 SO libspdk_bdev.so.16.0 00:03:52.553 SYMLINK libspdk_bdev.so 00:03:52.817 LIB libspdk_blobfs.a 00:03:52.817 SO libspdk_blobfs.so.10.0 00:03:52.817 CC lib/nbd/nbd.o 00:03:52.817 CC lib/ublk/ublk.o 00:03:52.817 CC lib/scsi/dev.o 00:03:52.817 CC lib/nbd/nbd_rpc.o 00:03:52.817 CC lib/scsi/lun.o 00:03:52.817 CC lib/ublk/ublk_rpc.o 00:03:52.817 CC lib/ftl/ftl_core.o 00:03:52.817 CC lib/scsi/port.o 
00:03:52.817 CC lib/ftl/ftl_init.o 00:03:52.817 CC lib/scsi/scsi.o 00:03:52.817 CC lib/ftl/ftl_layout.o 00:03:52.817 CC lib/scsi/scsi_bdev.o 00:03:52.817 CC lib/ftl/ftl_debug.o 00:03:52.817 CC lib/scsi/scsi_pr.o 00:03:52.817 CC lib/ftl/ftl_io.o 00:03:52.817 CC lib/scsi/scsi_rpc.o 00:03:52.817 CC lib/nvmf/ctrlr.o 00:03:52.817 CC lib/ftl/ftl_sb.o 00:03:52.817 CC lib/scsi/task.o 00:03:52.817 CC lib/nvmf/ctrlr_discovery.o 00:03:52.817 CC lib/ftl/ftl_l2p.o 00:03:52.817 CC lib/nvmf/ctrlr_bdev.o 00:03:52.817 CC lib/ftl/ftl_l2p_flat.o 00:03:52.817 CC lib/ftl/ftl_nv_cache.o 00:03:52.817 CC lib/nvmf/subsystem.o 00:03:52.817 CC lib/ftl/ftl_band.o 00:03:52.817 CC lib/nvmf/nvmf.o 00:03:52.817 CC lib/ftl/ftl_band_ops.o 00:03:52.817 CC lib/nvmf/nvmf_rpc.o 00:03:52.817 CC lib/ftl/ftl_writer.o 00:03:52.817 CC lib/nvmf/transport.o 00:03:52.817 CC lib/nvmf/tcp.o 00:03:52.817 CC lib/ftl/ftl_rq.o 00:03:52.817 CC lib/ftl/ftl_reloc.o 00:03:52.817 CC lib/nvmf/stubs.o 00:03:52.817 CC lib/ftl/ftl_l2p_cache.o 00:03:52.817 CC lib/nvmf/mdns_server.o 00:03:52.817 CC lib/ftl/ftl_p2l.o 00:03:52.817 CC lib/nvmf/vfio_user.o 00:03:52.817 CC lib/ftl/mngt/ftl_mngt.o 00:03:52.817 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:52.817 CC lib/nvmf/rdma.o 00:03:52.817 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:52.817 CC lib/nvmf/auth.o 00:03:52.817 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:52.817 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:52.817 SYMLINK libspdk_blobfs.so 00:03:52.817 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:52.817 LIB libspdk_lvol.a 00:03:53.079 SO libspdk_lvol.so.10.0 00:03:53.079 SYMLINK libspdk_lvol.so 00:03:53.079 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:53.079 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:53.079 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:53.079 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:53.079 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:53.079 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:53.347 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:53.347 CC lib/ftl/utils/ftl_conf.o 00:03:53.347 CC lib/ftl/utils/ftl_md.o 
00:03:53.347 CC lib/ftl/utils/ftl_mempool.o 00:03:53.347 CC lib/ftl/utils/ftl_bitmap.o 00:03:53.347 CC lib/ftl/utils/ftl_property.o 00:03:53.347 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:53.347 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:53.347 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:53.347 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:53.347 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:53.347 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:53.347 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:53.347 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:53.347 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:53.606 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:53.606 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:53.606 CC lib/ftl/base/ftl_base_dev.o 00:03:53.606 CC lib/ftl/base/ftl_base_bdev.o 00:03:53.606 CC lib/ftl/ftl_trace.o 00:03:53.606 LIB libspdk_nbd.a 00:03:53.606 SO libspdk_nbd.so.7.0 00:03:53.606 SYMLINK libspdk_nbd.so 00:03:53.864 LIB libspdk_scsi.a 00:03:53.864 SO libspdk_scsi.so.9.0 00:03:53.864 SYMLINK libspdk_scsi.so 00:03:53.864 LIB libspdk_ublk.a 00:03:53.864 SO libspdk_ublk.so.3.0 00:03:53.864 SYMLINK libspdk_ublk.so 00:03:54.122 CC lib/vhost/vhost.o 00:03:54.122 CC lib/iscsi/conn.o 00:03:54.122 CC lib/vhost/vhost_rpc.o 00:03:54.122 CC lib/iscsi/init_grp.o 00:03:54.122 CC lib/vhost/vhost_scsi.o 00:03:54.122 CC lib/iscsi/iscsi.o 00:03:54.122 CC lib/vhost/vhost_blk.o 00:03:54.122 CC lib/iscsi/md5.o 00:03:54.122 CC lib/vhost/rte_vhost_user.o 00:03:54.122 CC lib/iscsi/param.o 00:03:54.122 CC lib/iscsi/portal_grp.o 00:03:54.122 CC lib/iscsi/tgt_node.o 00:03:54.122 CC lib/iscsi/iscsi_subsystem.o 00:03:54.122 CC lib/iscsi/iscsi_rpc.o 00:03:54.122 CC lib/iscsi/task.o 00:03:54.380 LIB libspdk_ftl.a 00:03:54.380 SO libspdk_ftl.so.9.0 00:03:54.946 SYMLINK libspdk_ftl.so 00:03:55.204 LIB libspdk_vhost.a 00:03:55.204 SO libspdk_vhost.so.8.0 00:03:55.462 SYMLINK libspdk_vhost.so 00:03:55.462 LIB libspdk_nvmf.a 00:03:55.462 LIB libspdk_iscsi.a 00:03:55.462 SO libspdk_nvmf.so.19.0 00:03:55.462 SO 
libspdk_iscsi.so.8.0 00:03:55.720 SYMLINK libspdk_iscsi.so 00:03:55.720 SYMLINK libspdk_nvmf.so 00:03:55.978 CC module/vfu_device/vfu_virtio.o 00:03:55.978 CC module/env_dpdk/env_dpdk_rpc.o 00:03:55.978 CC module/vfu_device/vfu_virtio_blk.o 00:03:55.978 CC module/vfu_device/vfu_virtio_scsi.o 00:03:55.978 CC module/vfu_device/vfu_virtio_rpc.o 00:03:55.978 CC module/blob/bdev/blob_bdev.o 00:03:55.978 CC module/accel/ioat/accel_ioat.o 00:03:55.978 CC module/keyring/file/keyring.o 00:03:55.978 CC module/scheduler/gscheduler/gscheduler.o 00:03:55.978 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:55.978 CC module/accel/dsa/accel_dsa.o 00:03:55.978 CC module/keyring/linux/keyring.o 00:03:55.978 CC module/keyring/file/keyring_rpc.o 00:03:55.978 CC module/accel/ioat/accel_ioat_rpc.o 00:03:55.978 CC module/keyring/linux/keyring_rpc.o 00:03:55.978 CC module/accel/dsa/accel_dsa_rpc.o 00:03:55.978 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:55.978 CC module/accel/iaa/accel_iaa.o 00:03:55.978 CC module/accel/iaa/accel_iaa_rpc.o 00:03:55.978 CC module/sock/posix/posix.o 00:03:55.978 CC module/accel/error/accel_error.o 00:03:55.978 CC module/accel/error/accel_error_rpc.o 00:03:56.237 LIB libspdk_env_dpdk_rpc.a 00:03:56.237 SO libspdk_env_dpdk_rpc.so.6.0 00:03:56.237 SYMLINK libspdk_env_dpdk_rpc.so 00:03:56.237 LIB libspdk_keyring_linux.a 00:03:56.237 LIB libspdk_scheduler_gscheduler.a 00:03:56.237 LIB libspdk_scheduler_dpdk_governor.a 00:03:56.237 SO libspdk_keyring_linux.so.1.0 00:03:56.237 LIB libspdk_keyring_file.a 00:03:56.237 SO libspdk_scheduler_gscheduler.so.4.0 00:03:56.237 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:56.237 LIB libspdk_accel_error.a 00:03:56.237 LIB libspdk_accel_ioat.a 00:03:56.237 LIB libspdk_accel_iaa.a 00:03:56.237 SO libspdk_keyring_file.so.1.0 00:03:56.237 LIB libspdk_scheduler_dynamic.a 00:03:56.237 SO libspdk_accel_error.so.2.0 00:03:56.237 SO libspdk_accel_ioat.so.6.0 00:03:56.237 SYMLINK libspdk_keyring_linux.so 
00:03:56.237 SYMLINK libspdk_scheduler_gscheduler.so 00:03:56.237 SO libspdk_scheduler_dynamic.so.4.0 00:03:56.237 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:56.237 SO libspdk_accel_iaa.so.3.0 00:03:56.237 SYMLINK libspdk_keyring_file.so 00:03:56.237 LIB libspdk_accel_dsa.a 00:03:56.237 SYMLINK libspdk_accel_error.so 00:03:56.237 SYMLINK libspdk_accel_ioat.so 00:03:56.237 LIB libspdk_blob_bdev.a 00:03:56.495 SYMLINK libspdk_scheduler_dynamic.so 00:03:56.495 SO libspdk_accel_dsa.so.5.0 00:03:56.495 SYMLINK libspdk_accel_iaa.so 00:03:56.495 SO libspdk_blob_bdev.so.11.0 00:03:56.495 SYMLINK libspdk_accel_dsa.so 00:03:56.495 SYMLINK libspdk_blob_bdev.so 00:03:56.754 LIB libspdk_vfu_device.a 00:03:56.754 SO libspdk_vfu_device.so.3.0 00:03:56.754 CC module/bdev/malloc/bdev_malloc.o 00:03:56.754 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:56.754 CC module/bdev/gpt/gpt.o 00:03:56.754 CC module/bdev/delay/vbdev_delay.o 00:03:56.754 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:56.754 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:56.754 CC module/bdev/gpt/vbdev_gpt.o 00:03:56.754 CC module/blobfs/bdev/blobfs_bdev.o 00:03:56.754 CC module/bdev/error/vbdev_error.o 00:03:56.754 CC module/bdev/error/vbdev_error_rpc.o 00:03:56.754 CC module/bdev/nvme/bdev_nvme.o 00:03:56.754 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:56.754 CC module/bdev/nvme/nvme_rpc.o 00:03:56.754 CC module/bdev/aio/bdev_aio.o 00:03:56.754 CC module/bdev/split/vbdev_split.o 00:03:56.754 CC module/bdev/raid/bdev_raid.o 00:03:56.754 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:56.754 CC module/bdev/aio/bdev_aio_rpc.o 00:03:56.754 CC module/bdev/nvme/bdev_mdns_client.o 00:03:56.754 CC module/bdev/null/bdev_null.o 00:03:56.754 CC module/bdev/raid/bdev_raid_rpc.o 00:03:56.754 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:56.754 CC module/bdev/nvme/vbdev_opal.o 00:03:56.754 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:56.754 CC module/bdev/lvol/vbdev_lvol.o 00:03:56.754 CC 
module/bdev/split/vbdev_split_rpc.o 00:03:56.754 CC module/bdev/null/bdev_null_rpc.o 00:03:56.754 CC module/bdev/raid/bdev_raid_sb.o 00:03:56.754 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:56.754 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:56.754 CC module/bdev/raid/raid0.o 00:03:56.754 CC module/bdev/iscsi/bdev_iscsi.o 00:03:56.754 CC module/bdev/raid/raid1.o 00:03:56.754 CC module/bdev/passthru/vbdev_passthru.o 00:03:56.754 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:56.754 CC module/bdev/raid/concat.o 00:03:56.754 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:56.754 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:56.754 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:56.754 CC module/bdev/ftl/bdev_ftl.o 00:03:56.754 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:56.754 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:56.754 SYMLINK libspdk_vfu_device.so 00:03:57.012 LIB libspdk_sock_posix.a 00:03:57.012 SO libspdk_sock_posix.so.6.0 00:03:57.012 LIB libspdk_blobfs_bdev.a 00:03:57.012 SO libspdk_blobfs_bdev.so.6.0 00:03:57.012 SYMLINK libspdk_sock_posix.so 00:03:57.012 LIB libspdk_bdev_ftl.a 00:03:57.270 SO libspdk_bdev_ftl.so.6.0 00:03:57.270 SYMLINK libspdk_blobfs_bdev.so 00:03:57.270 LIB libspdk_bdev_split.a 00:03:57.270 LIB libspdk_bdev_gpt.a 00:03:57.270 SO libspdk_bdev_split.so.6.0 00:03:57.270 SO libspdk_bdev_gpt.so.6.0 00:03:57.270 LIB libspdk_bdev_null.a 00:03:57.270 LIB libspdk_bdev_error.a 00:03:57.270 SYMLINK libspdk_bdev_ftl.so 00:03:57.270 SO libspdk_bdev_error.so.6.0 00:03:57.270 SO libspdk_bdev_null.so.6.0 00:03:57.270 LIB libspdk_bdev_aio.a 00:03:57.270 SYMLINK libspdk_bdev_split.so 00:03:57.270 SYMLINK libspdk_bdev_gpt.so 00:03:57.270 LIB libspdk_bdev_iscsi.a 00:03:57.270 LIB libspdk_bdev_passthru.a 00:03:57.270 SO libspdk_bdev_aio.so.6.0 00:03:57.270 LIB libspdk_bdev_delay.a 00:03:57.270 LIB libspdk_bdev_zone_block.a 00:03:57.270 SYMLINK libspdk_bdev_error.so 00:03:57.270 SYMLINK libspdk_bdev_null.so 00:03:57.270 SO libspdk_bdev_iscsi.so.6.0 
00:03:57.270 SO libspdk_bdev_passthru.so.6.0 00:03:57.270 SO libspdk_bdev_delay.so.6.0 00:03:57.270 SO libspdk_bdev_zone_block.so.6.0 00:03:57.270 LIB libspdk_bdev_malloc.a 00:03:57.270 SYMLINK libspdk_bdev_aio.so 00:03:57.270 SO libspdk_bdev_malloc.so.6.0 00:03:57.270 SYMLINK libspdk_bdev_iscsi.so 00:03:57.270 SYMLINK libspdk_bdev_passthru.so 00:03:57.270 SYMLINK libspdk_bdev_delay.so 00:03:57.270 SYMLINK libspdk_bdev_zone_block.so 00:03:57.528 SYMLINK libspdk_bdev_malloc.so 00:03:57.528 LIB libspdk_bdev_lvol.a 00:03:57.528 LIB libspdk_bdev_virtio.a 00:03:57.528 SO libspdk_bdev_lvol.so.6.0 00:03:57.528 SO libspdk_bdev_virtio.so.6.0 00:03:57.528 SYMLINK libspdk_bdev_lvol.so 00:03:57.528 SYMLINK libspdk_bdev_virtio.so 00:03:57.786 LIB libspdk_bdev_raid.a 00:03:58.044 SO libspdk_bdev_raid.so.6.0 00:03:58.044 SYMLINK libspdk_bdev_raid.so 00:03:58.978 LIB libspdk_bdev_nvme.a 00:03:59.237 SO libspdk_bdev_nvme.so.7.0 00:03:59.237 SYMLINK libspdk_bdev_nvme.so 00:03:59.495 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:59.495 CC module/event/subsystems/iobuf/iobuf.o 00:03:59.495 CC module/event/subsystems/scheduler/scheduler.o 00:03:59.495 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:59.495 CC module/event/subsystems/keyring/keyring.o 00:03:59.495 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:59.495 CC module/event/subsystems/sock/sock.o 00:03:59.495 CC module/event/subsystems/vmd/vmd.o 00:03:59.495 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:59.755 LIB libspdk_event_keyring.a 00:03:59.755 LIB libspdk_event_vhost_blk.a 00:03:59.755 LIB libspdk_event_scheduler.a 00:03:59.755 LIB libspdk_event_vfu_tgt.a 00:03:59.755 LIB libspdk_event_sock.a 00:03:59.755 LIB libspdk_event_vmd.a 00:03:59.755 LIB libspdk_event_iobuf.a 00:03:59.755 SO libspdk_event_keyring.so.1.0 00:03:59.755 SO libspdk_event_vhost_blk.so.3.0 00:03:59.755 SO libspdk_event_vfu_tgt.so.3.0 00:03:59.755 SO libspdk_event_sock.so.5.0 00:03:59.755 SO libspdk_event_scheduler.so.4.0 00:03:59.755 SO 
libspdk_event_vmd.so.6.0 00:03:59.755 SO libspdk_event_iobuf.so.3.0 00:03:59.755 SYMLINK libspdk_event_keyring.so 00:03:59.755 SYMLINK libspdk_event_vhost_blk.so 00:03:59.755 SYMLINK libspdk_event_vfu_tgt.so 00:03:59.755 SYMLINK libspdk_event_sock.so 00:03:59.755 SYMLINK libspdk_event_scheduler.so 00:03:59.755 SYMLINK libspdk_event_vmd.so 00:03:59.755 SYMLINK libspdk_event_iobuf.so 00:04:00.013 CC module/event/subsystems/accel/accel.o 00:04:00.272 LIB libspdk_event_accel.a 00:04:00.272 SO libspdk_event_accel.so.6.0 00:04:00.272 SYMLINK libspdk_event_accel.so 00:04:00.530 CC module/event/subsystems/bdev/bdev.o 00:04:00.530 LIB libspdk_event_bdev.a 00:04:00.530 SO libspdk_event_bdev.so.6.0 00:04:00.789 SYMLINK libspdk_event_bdev.so 00:04:00.789 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:00.789 CC module/event/subsystems/nbd/nbd.o 00:04:00.789 CC module/event/subsystems/ublk/ublk.o 00:04:00.789 CC module/event/subsystems/scsi/scsi.o 00:04:00.789 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:01.047 LIB libspdk_event_ublk.a 00:04:01.047 LIB libspdk_event_nbd.a 00:04:01.047 LIB libspdk_event_scsi.a 00:04:01.047 SO libspdk_event_ublk.so.3.0 00:04:01.047 SO libspdk_event_nbd.so.6.0 00:04:01.047 SO libspdk_event_scsi.so.6.0 00:04:01.047 SYMLINK libspdk_event_ublk.so 00:04:01.047 SYMLINK libspdk_event_nbd.so 00:04:01.047 SYMLINK libspdk_event_scsi.so 00:04:01.047 LIB libspdk_event_nvmf.a 00:04:01.047 SO libspdk_event_nvmf.so.6.0 00:04:01.047 SYMLINK libspdk_event_nvmf.so 00:04:01.304 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:01.304 CC module/event/subsystems/iscsi/iscsi.o 00:04:01.304 LIB libspdk_event_vhost_scsi.a 00:04:01.304 SO libspdk_event_vhost_scsi.so.3.0 00:04:01.304 LIB libspdk_event_iscsi.a 00:04:01.304 SO libspdk_event_iscsi.so.6.0 00:04:01.562 SYMLINK libspdk_event_vhost_scsi.so 00:04:01.562 SYMLINK libspdk_event_iscsi.so 00:04:01.562 SO libspdk.so.6.0 00:04:01.562 SYMLINK libspdk.so 00:04:01.826 CC app/trace_record/trace_record.o 
00:04:01.826 CC app/spdk_top/spdk_top.o 00:04:01.826 CXX app/trace/trace.o 00:04:01.826 CC app/spdk_lspci/spdk_lspci.o 00:04:01.826 CC app/spdk_nvme_perf/perf.o 00:04:01.826 CC test/rpc_client/rpc_client_test.o 00:04:01.826 CC app/spdk_nvme_identify/identify.o 00:04:01.826 CC app/spdk_nvme_discover/discovery_aer.o 00:04:01.826 TEST_HEADER include/spdk/accel.h 00:04:01.826 TEST_HEADER include/spdk/accel_module.h 00:04:01.826 TEST_HEADER include/spdk/assert.h 00:04:01.826 TEST_HEADER include/spdk/barrier.h 00:04:01.826 TEST_HEADER include/spdk/base64.h 00:04:01.826 TEST_HEADER include/spdk/bdev.h 00:04:01.826 TEST_HEADER include/spdk/bdev_module.h 00:04:01.826 TEST_HEADER include/spdk/bdev_zone.h 00:04:01.826 TEST_HEADER include/spdk/bit_array.h 00:04:01.826 TEST_HEADER include/spdk/blob_bdev.h 00:04:01.826 TEST_HEADER include/spdk/bit_pool.h 00:04:01.826 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:01.826 TEST_HEADER include/spdk/blob.h 00:04:01.826 TEST_HEADER include/spdk/blobfs.h 00:04:01.826 TEST_HEADER include/spdk/config.h 00:04:01.826 TEST_HEADER include/spdk/conf.h 00:04:01.826 TEST_HEADER include/spdk/cpuset.h 00:04:01.826 TEST_HEADER include/spdk/crc16.h 00:04:01.826 TEST_HEADER include/spdk/crc32.h 00:04:01.826 TEST_HEADER include/spdk/crc64.h 00:04:01.826 TEST_HEADER include/spdk/dif.h 00:04:01.826 TEST_HEADER include/spdk/dma.h 00:04:01.826 TEST_HEADER include/spdk/endian.h 00:04:01.826 TEST_HEADER include/spdk/env_dpdk.h 00:04:01.826 TEST_HEADER include/spdk/env.h 00:04:01.826 TEST_HEADER include/spdk/event.h 00:04:01.826 TEST_HEADER include/spdk/fd_group.h 00:04:01.826 TEST_HEADER include/spdk/fd.h 00:04:01.826 TEST_HEADER include/spdk/file.h 00:04:01.826 TEST_HEADER include/spdk/ftl.h 00:04:01.826 TEST_HEADER include/spdk/gpt_spec.h 00:04:01.826 TEST_HEADER include/spdk/hexlify.h 00:04:01.826 TEST_HEADER include/spdk/histogram_data.h 00:04:01.826 TEST_HEADER include/spdk/idxd.h 00:04:01.826 TEST_HEADER include/spdk/idxd_spec.h 00:04:01.826 
TEST_HEADER include/spdk/init.h 00:04:01.826 TEST_HEADER include/spdk/ioat.h 00:04:01.826 TEST_HEADER include/spdk/ioat_spec.h 00:04:01.826 TEST_HEADER include/spdk/iscsi_spec.h 00:04:01.826 TEST_HEADER include/spdk/json.h 00:04:01.826 TEST_HEADER include/spdk/jsonrpc.h 00:04:01.826 TEST_HEADER include/spdk/keyring.h 00:04:01.826 TEST_HEADER include/spdk/keyring_module.h 00:04:01.826 TEST_HEADER include/spdk/likely.h 00:04:01.826 TEST_HEADER include/spdk/log.h 00:04:01.826 TEST_HEADER include/spdk/lvol.h 00:04:01.826 TEST_HEADER include/spdk/memory.h 00:04:01.826 TEST_HEADER include/spdk/mmio.h 00:04:01.826 TEST_HEADER include/spdk/nbd.h 00:04:01.826 TEST_HEADER include/spdk/net.h 00:04:01.826 TEST_HEADER include/spdk/notify.h 00:04:01.826 TEST_HEADER include/spdk/nvme_intel.h 00:04:01.826 TEST_HEADER include/spdk/nvme.h 00:04:01.826 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:01.826 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:01.826 TEST_HEADER include/spdk/nvme_spec.h 00:04:01.826 TEST_HEADER include/spdk/nvme_zns.h 00:04:01.826 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:01.826 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:01.826 TEST_HEADER include/spdk/nvmf.h 00:04:01.826 TEST_HEADER include/spdk/nvmf_spec.h 00:04:01.826 TEST_HEADER include/spdk/nvmf_transport.h 00:04:01.826 TEST_HEADER include/spdk/opal.h 00:04:01.826 TEST_HEADER include/spdk/opal_spec.h 00:04:01.826 TEST_HEADER include/spdk/pci_ids.h 00:04:01.826 TEST_HEADER include/spdk/queue.h 00:04:01.826 TEST_HEADER include/spdk/pipe.h 00:04:01.826 TEST_HEADER include/spdk/reduce.h 00:04:01.826 TEST_HEADER include/spdk/rpc.h 00:04:01.826 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:01.826 TEST_HEADER include/spdk/scheduler.h 00:04:01.826 TEST_HEADER include/spdk/scsi.h 00:04:01.826 TEST_HEADER include/spdk/scsi_spec.h 00:04:01.826 TEST_HEADER include/spdk/sock.h 00:04:01.826 TEST_HEADER include/spdk/stdinc.h 00:04:01.826 TEST_HEADER include/spdk/thread.h 00:04:01.826 TEST_HEADER 
include/spdk/string.h 00:04:01.826 TEST_HEADER include/spdk/trace.h 00:04:01.826 TEST_HEADER include/spdk/trace_parser.h 00:04:01.826 TEST_HEADER include/spdk/tree.h 00:04:01.826 TEST_HEADER include/spdk/ublk.h 00:04:01.826 TEST_HEADER include/spdk/util.h 00:04:01.826 TEST_HEADER include/spdk/uuid.h 00:04:01.826 TEST_HEADER include/spdk/version.h 00:04:01.826 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:01.826 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:01.826 CC app/spdk_dd/spdk_dd.o 00:04:01.826 TEST_HEADER include/spdk/vhost.h 00:04:01.826 TEST_HEADER include/spdk/vmd.h 00:04:01.826 TEST_HEADER include/spdk/xor.h 00:04:01.826 TEST_HEADER include/spdk/zipf.h 00:04:01.826 CXX test/cpp_headers/accel.o 00:04:01.826 CXX test/cpp_headers/accel_module.o 00:04:01.826 CXX test/cpp_headers/assert.o 00:04:01.826 CXX test/cpp_headers/barrier.o 00:04:01.826 CXX test/cpp_headers/base64.o 00:04:01.826 CXX test/cpp_headers/bdev.o 00:04:01.826 CXX test/cpp_headers/bdev_module.o 00:04:01.826 CXX test/cpp_headers/bdev_zone.o 00:04:01.826 CXX test/cpp_headers/bit_array.o 00:04:01.826 CXX test/cpp_headers/bit_pool.o 00:04:01.826 CXX test/cpp_headers/blob_bdev.o 00:04:01.826 CXX test/cpp_headers/blobfs_bdev.o 00:04:01.826 CXX test/cpp_headers/blobfs.o 00:04:01.826 CXX test/cpp_headers/blob.o 00:04:01.826 CXX test/cpp_headers/conf.o 00:04:01.826 CXX test/cpp_headers/config.o 00:04:01.826 CXX test/cpp_headers/cpuset.o 00:04:01.826 CXX test/cpp_headers/crc16.o 00:04:01.826 CC app/nvmf_tgt/nvmf_main.o 00:04:01.826 CC app/iscsi_tgt/iscsi_tgt.o 00:04:01.826 CXX test/cpp_headers/crc32.o 00:04:01.826 CC app/spdk_tgt/spdk_tgt.o 00:04:01.826 CC test/env/vtophys/vtophys.o 00:04:01.826 CC test/thread/poller_perf/poller_perf.o 00:04:01.826 CC examples/ioat/verify/verify.o 00:04:01.826 CC examples/util/zipf/zipf.o 00:04:01.826 CC test/env/pci/pci_ut.o 00:04:01.826 CC app/fio/nvme/fio_plugin.o 00:04:01.826 CC test/app/jsoncat/jsoncat.o 00:04:01.826 CC test/app/histogram_perf/histogram_perf.o 
00:04:01.826 CC examples/ioat/perf/perf.o 00:04:01.826 CC test/env/memory/memory_ut.o 00:04:01.826 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:01.826 CC test/app/stub/stub.o 00:04:02.085 CC test/dma/test_dma/test_dma.o 00:04:02.085 CC test/app/bdev_svc/bdev_svc.o 00:04:02.085 CC app/fio/bdev/fio_plugin.o 00:04:02.085 LINK spdk_lspci 00:04:02.085 CC test/env/mem_callbacks/mem_callbacks.o 00:04:02.085 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:02.085 LINK rpc_client_test 00:04:02.085 LINK spdk_nvme_discover 00:04:02.349 LINK interrupt_tgt 00:04:02.349 LINK vtophys 00:04:02.349 LINK poller_perf 00:04:02.349 LINK jsoncat 00:04:02.349 LINK zipf 00:04:02.349 LINK histogram_perf 00:04:02.349 LINK nvmf_tgt 00:04:02.349 CXX test/cpp_headers/crc64.o 00:04:02.349 LINK spdk_trace_record 00:04:02.349 CXX test/cpp_headers/dif.o 00:04:02.349 CXX test/cpp_headers/dma.o 00:04:02.349 CXX test/cpp_headers/endian.o 00:04:02.349 CXX test/cpp_headers/env_dpdk.o 00:04:02.349 CXX test/cpp_headers/env.o 00:04:02.349 LINK env_dpdk_post_init 00:04:02.349 CXX test/cpp_headers/event.o 00:04:02.349 CXX test/cpp_headers/fd_group.o 00:04:02.349 CXX test/cpp_headers/fd.o 00:04:02.349 CXX test/cpp_headers/file.o 00:04:02.349 LINK iscsi_tgt 00:04:02.349 LINK stub 00:04:02.349 CXX test/cpp_headers/ftl.o 00:04:02.349 CXX test/cpp_headers/gpt_spec.o 00:04:02.349 CXX test/cpp_headers/hexlify.o 00:04:02.349 CXX test/cpp_headers/histogram_data.o 00:04:02.349 CXX test/cpp_headers/idxd.o 00:04:02.349 CXX test/cpp_headers/idxd_spec.o 00:04:02.349 LINK ioat_perf 00:04:02.349 LINK bdev_svc 00:04:02.349 LINK verify 00:04:02.349 LINK spdk_tgt 00:04:02.349 CXX test/cpp_headers/init.o 00:04:02.349 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:02.349 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:02.609 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:02.609 CXX test/cpp_headers/ioat.o 00:04:02.609 CXX test/cpp_headers/ioat_spec.o 00:04:02.609 LINK spdk_dd 00:04:02.609 LINK spdk_trace 
00:04:02.609 CXX test/cpp_headers/iscsi_spec.o 00:04:02.609 CXX test/cpp_headers/json.o 00:04:02.609 CXX test/cpp_headers/jsonrpc.o 00:04:02.609 CXX test/cpp_headers/keyring.o 00:04:02.609 CXX test/cpp_headers/keyring_module.o 00:04:02.609 CXX test/cpp_headers/likely.o 00:04:02.609 CXX test/cpp_headers/log.o 00:04:02.609 LINK pci_ut 00:04:02.609 CXX test/cpp_headers/lvol.o 00:04:02.609 CXX test/cpp_headers/memory.o 00:04:02.609 CXX test/cpp_headers/mmio.o 00:04:02.609 CXX test/cpp_headers/nbd.o 00:04:02.609 CXX test/cpp_headers/net.o 00:04:02.609 CXX test/cpp_headers/notify.o 00:04:02.609 CXX test/cpp_headers/nvme.o 00:04:02.869 CXX test/cpp_headers/nvme_intel.o 00:04:02.869 CXX test/cpp_headers/nvme_ocssd.o 00:04:02.869 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:02.869 CXX test/cpp_headers/nvme_spec.o 00:04:02.869 LINK test_dma 00:04:02.869 CXX test/cpp_headers/nvme_zns.o 00:04:02.869 CXX test/cpp_headers/nvmf_cmd.o 00:04:02.869 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:02.869 CXX test/cpp_headers/nvmf.o 00:04:02.869 CXX test/cpp_headers/nvmf_spec.o 00:04:02.869 CXX test/cpp_headers/nvmf_transport.o 00:04:02.869 CXX test/cpp_headers/opal.o 00:04:02.869 CXX test/cpp_headers/opal_spec.o 00:04:02.869 CC test/event/event_perf/event_perf.o 00:04:02.869 CC test/event/reactor/reactor.o 00:04:02.869 LINK nvme_fuzz 00:04:02.869 CXX test/cpp_headers/pci_ids.o 00:04:02.869 LINK spdk_bdev 00:04:03.133 CXX test/cpp_headers/pipe.o 00:04:03.133 CC test/event/reactor_perf/reactor_perf.o 00:04:03.133 CXX test/cpp_headers/queue.o 00:04:03.133 CC examples/sock/hello_world/hello_sock.o 00:04:03.133 CC examples/vmd/lsvmd/lsvmd.o 00:04:03.133 CXX test/cpp_headers/reduce.o 00:04:03.133 CXX test/cpp_headers/rpc.o 00:04:03.133 CC examples/idxd/perf/perf.o 00:04:03.133 CXX test/cpp_headers/scheduler.o 00:04:03.133 CXX test/cpp_headers/scsi.o 00:04:03.133 CXX test/cpp_headers/scsi_spec.o 00:04:03.133 LINK spdk_nvme 00:04:03.133 CC examples/thread/thread/thread_ex.o 00:04:03.133 CC 
test/event/app_repeat/app_repeat.o 00:04:03.133 CXX test/cpp_headers/sock.o 00:04:03.133 CXX test/cpp_headers/stdinc.o 00:04:03.133 CXX test/cpp_headers/string.o 00:04:03.133 CXX test/cpp_headers/thread.o 00:04:03.133 CXX test/cpp_headers/trace.o 00:04:03.133 CC test/event/scheduler/scheduler.o 00:04:03.133 CXX test/cpp_headers/trace_parser.o 00:04:03.133 CC examples/vmd/led/led.o 00:04:03.133 CXX test/cpp_headers/tree.o 00:04:03.133 CXX test/cpp_headers/ublk.o 00:04:03.133 CXX test/cpp_headers/util.o 00:04:03.133 CXX test/cpp_headers/uuid.o 00:04:03.133 CXX test/cpp_headers/version.o 00:04:03.133 CXX test/cpp_headers/vfio_user_pci.o 00:04:03.133 CXX test/cpp_headers/vfio_user_spec.o 00:04:03.133 CXX test/cpp_headers/vhost.o 00:04:03.394 LINK event_perf 00:04:03.394 CXX test/cpp_headers/vmd.o 00:04:03.394 LINK reactor 00:04:03.394 CXX test/cpp_headers/xor.o 00:04:03.394 CXX test/cpp_headers/zipf.o 00:04:03.394 LINK spdk_nvme_perf 00:04:03.394 LINK mem_callbacks 00:04:03.394 CC app/vhost/vhost.o 00:04:03.394 LINK reactor_perf 00:04:03.394 LINK lsvmd 00:04:03.394 LINK vhost_fuzz 00:04:03.394 LINK spdk_nvme_identify 00:04:03.394 LINK app_repeat 00:04:03.394 LINK spdk_top 00:04:03.394 LINK led 00:04:03.653 LINK hello_sock 00:04:03.653 CC test/nvme/reset/reset.o 00:04:03.653 CC test/nvme/overhead/overhead.o 00:04:03.653 CC test/nvme/err_injection/err_injection.o 00:04:03.653 CC test/nvme/sgl/sgl.o 00:04:03.653 CC test/nvme/e2edp/nvme_dp.o 00:04:03.653 CC test/nvme/startup/startup.o 00:04:03.653 CC test/nvme/reserve/reserve.o 00:04:03.653 CC test/blobfs/mkfs/mkfs.o 00:04:03.653 CC test/nvme/simple_copy/simple_copy.o 00:04:03.653 CC test/nvme/aer/aer.o 00:04:03.653 CC test/accel/dif/dif.o 00:04:03.653 CC test/nvme/connect_stress/connect_stress.o 00:04:03.653 CC test/nvme/boot_partition/boot_partition.o 00:04:03.653 LINK thread 00:04:03.653 LINK scheduler 00:04:03.653 CC test/nvme/compliance/nvme_compliance.o 00:04:03.653 CC test/nvme/fused_ordering/fused_ordering.o 
00:04:03.653 CC test/lvol/esnap/esnap.o 00:04:03.653 CC test/nvme/fdp/fdp.o 00:04:03.653 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:03.653 CC test/nvme/cuse/cuse.o 00:04:03.653 LINK idxd_perf 00:04:03.653 LINK vhost 00:04:03.912 LINK startup 00:04:03.912 LINK err_injection 00:04:03.912 LINK connect_stress 00:04:03.912 LINK fused_ordering 00:04:03.912 LINK boot_partition 00:04:03.912 LINK mkfs 00:04:03.912 LINK aer 00:04:03.912 LINK simple_copy 00:04:03.912 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:03.912 CC examples/nvme/arbitration/arbitration.o 00:04:03.912 CC examples/nvme/reconnect/reconnect.o 00:04:03.912 CC examples/nvme/abort/abort.o 00:04:03.912 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:03.912 CC examples/nvme/hotplug/hotplug.o 00:04:03.912 LINK reserve 00:04:03.912 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:03.912 CC examples/nvme/hello_world/hello_world.o 00:04:03.912 LINK doorbell_aers 00:04:03.912 LINK nvme_compliance 00:04:04.171 LINK sgl 00:04:04.171 LINK overhead 00:04:04.171 CC examples/accel/perf/accel_perf.o 00:04:04.171 LINK reset 00:04:04.171 LINK fdp 00:04:04.171 LINK nvme_dp 00:04:04.171 CC examples/blob/hello_world/hello_blob.o 00:04:04.171 CC examples/blob/cli/blobcli.o 00:04:04.171 LINK memory_ut 00:04:04.171 LINK dif 00:04:04.429 LINK pmr_persistence 00:04:04.429 LINK cmb_copy 00:04:04.429 LINK hello_world 00:04:04.429 LINK hotplug 00:04:04.429 LINK abort 00:04:04.429 LINK arbitration 00:04:04.429 LINK reconnect 00:04:04.429 LINK hello_blob 00:04:04.429 LINK nvme_manage 00:04:04.687 LINK accel_perf 00:04:04.687 CC test/bdev/bdevio/bdevio.o 00:04:04.687 LINK blobcli 00:04:04.946 LINK iscsi_fuzz 00:04:04.946 CC examples/bdev/hello_world/hello_bdev.o 00:04:04.946 CC examples/bdev/bdevperf/bdevperf.o 00:04:04.946 LINK bdevio 00:04:05.204 LINK hello_bdev 00:04:05.204 LINK cuse 00:04:05.803 LINK bdevperf 00:04:06.061 CC examples/nvmf/nvmf/nvmf.o 00:04:06.319 LINK nvmf 00:04:08.849 LINK esnap 00:04:09.107 00:04:09.107 
real 0m49.032s 00:04:09.107 user 10m9.328s 00:04:09.107 sys 2m29.634s 00:04:09.107 18:55:01 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:09.108 18:55:01 make -- common/autotest_common.sh@10 -- $ set +x 00:04:09.108 ************************************ 00:04:09.108 END TEST make 00:04:09.108 ************************************ 00:04:09.108 18:55:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:09.108 18:55:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:09.108 18:55:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:09.108 18:55:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.108 18:55:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:09.108 18:55:01 -- pm/common@44 -- $ pid=682893 00:04:09.108 18:55:01 -- pm/common@50 -- $ kill -TERM 682893 00:04:09.108 18:55:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.108 18:55:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:09.108 18:55:01 -- pm/common@44 -- $ pid=682895 00:04:09.108 18:55:01 -- pm/common@50 -- $ kill -TERM 682895 00:04:09.108 18:55:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.108 18:55:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:09.108 18:55:01 -- pm/common@44 -- $ pid=682897 00:04:09.108 18:55:01 -- pm/common@50 -- $ kill -TERM 682897 00:04:09.108 18:55:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.108 18:55:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:09.108 18:55:01 -- pm/common@44 -- $ pid=682926 00:04:09.108 18:55:01 -- pm/common@50 -- $ sudo -E kill -TERM 682926 00:04:09.108 18:55:01 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:09.108 18:55:01 -- nvmf/common.sh@7 -- # uname -s 00:04:09.108 18:55:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:09.108 18:55:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:09.108 18:55:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:09.108 18:55:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:09.108 18:55:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:09.108 18:55:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:09.108 18:55:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:09.108 18:55:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:09.108 18:55:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:09.108 18:55:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:09.108 18:55:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:09.108 18:55:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:09.108 18:55:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:09.108 18:55:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:09.108 18:55:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:09.108 18:55:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:09.108 18:55:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:09.108 18:55:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:09.108 18:55:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:09.108 18:55:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:09.108 18:55:01 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.108 18:55:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.108 18:55:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.108 18:55:01 -- paths/export.sh@5 -- # export PATH 00:04:09.108 18:55:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.108 18:55:01 -- nvmf/common.sh@47 -- # : 0 00:04:09.108 18:55:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:09.108 18:55:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:09.108 18:55:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:09.108 18:55:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:09.108 18:55:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:09.108 18:55:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:09.108 18:55:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:09.108 18:55:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:09.108 18:55:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:09.108 18:55:01 -- spdk/autotest.sh@32 -- # 
uname -s 00:04:09.108 18:55:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:09.108 18:55:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:09.108 18:55:01 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:09.108 18:55:01 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:09.108 18:55:01 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:09.108 18:55:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:09.108 18:55:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:09.108 18:55:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:09.108 18:55:01 -- spdk/autotest.sh@48 -- # udevadm_pid=738892 00:04:09.108 18:55:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:09.108 18:55:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:09.108 18:55:01 -- pm/common@17 -- # local monitor 00:04:09.108 18:55:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.108 18:55:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.108 18:55:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.108 18:55:01 -- pm/common@21 -- # date +%s 00:04:09.108 18:55:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.108 18:55:01 -- pm/common@21 -- # date +%s 00:04:09.108 18:55:01 -- pm/common@25 -- # sleep 1 00:04:09.108 18:55:01 -- pm/common@21 -- # date +%s 00:04:09.108 18:55:01 -- pm/common@21 -- # date +%s 00:04:09.108 18:55:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721926501 00:04:09.108 18:55:01 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721926501 00:04:09.108 18:55:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721926501 00:04:09.108 18:55:01 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721926501 00:04:09.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721926501_collect-vmstat.pm.log 00:04:09.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721926501_collect-cpu-load.pm.log 00:04:09.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721926501_collect-cpu-temp.pm.log 00:04:09.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721926501_collect-bmc-pm.bmc.pm.log 00:04:10.487 18:55:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:10.487 18:55:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:10.487 18:55:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:10.487 18:55:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.487 18:55:02 -- spdk/autotest.sh@59 -- # create_test_list 00:04:10.487 18:55:02 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:10.487 18:55:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.487 18:55:02 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:10.487 18:55:02 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:10.487 18:55:02 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:10.487 18:55:02 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:10.487 18:55:02 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:10.487 18:55:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:10.487 18:55:02 -- common/autotest_common.sh@1455 -- # uname 00:04:10.487 18:55:02 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:10.487 18:55:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:10.487 18:55:02 -- common/autotest_common.sh@1475 -- # uname 00:04:10.487 18:55:02 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:10.487 18:55:02 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:10.487 18:55:02 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:10.487 18:55:02 -- spdk/autotest.sh@72 -- # hash lcov 00:04:10.487 18:55:02 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:10.487 18:55:02 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:10.487 --rc lcov_branch_coverage=1 00:04:10.487 --rc lcov_function_coverage=1 00:04:10.487 --rc genhtml_branch_coverage=1 00:04:10.487 --rc genhtml_function_coverage=1 00:04:10.487 --rc genhtml_legend=1 00:04:10.487 --rc geninfo_all_blocks=1 00:04:10.487 ' 00:04:10.487 18:55:02 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:10.487 --rc lcov_branch_coverage=1 00:04:10.487 --rc lcov_function_coverage=1 00:04:10.487 --rc genhtml_branch_coverage=1 00:04:10.487 --rc genhtml_function_coverage=1 00:04:10.487 --rc genhtml_legend=1 00:04:10.487 --rc geninfo_all_blocks=1 00:04:10.487 ' 00:04:10.487 18:55:02 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:10.487 --rc lcov_branch_coverage=1 00:04:10.487 --rc lcov_function_coverage=1 00:04:10.487 --rc genhtml_branch_coverage=1 00:04:10.487 --rc 
genhtml_function_coverage=1 00:04:10.487 --rc genhtml_legend=1 00:04:10.487 --rc geninfo_all_blocks=1 00:04:10.487 --no-external' 00:04:10.487 18:55:02 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:10.487 --rc lcov_branch_coverage=1 00:04:10.487 --rc lcov_function_coverage=1 00:04:10.487 --rc genhtml_branch_coverage=1 00:04:10.487 --rc genhtml_function_coverage=1 00:04:10.487 --rc genhtml_legend=1 00:04:10.487 --rc geninfo_all_blocks=1 00:04:10.487 --no-external' 00:04:10.487 18:55:02 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:10.487 lcov: LCOV version 1.14 00:04:10.487 18:55:02 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:28.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:28.560 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no 
functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:40.753 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions 
found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:40.753 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:40.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:40.754 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions 
found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:40.754 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:40.754 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:40.754 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:40.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:40.754 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:44.932 18:55:36 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:44.932 18:55:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.932 18:55:36 -- common/autotest_common.sh@10 -- # set +x 00:04:44.932 18:55:36 -- spdk/autotest.sh@91 -- # rm -f 00:04:44.932 18:55:36 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:45.496 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:45.496 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:45.496 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:45.496 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:45.496 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:45.496 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:45.496 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:45.496 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:45.496 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:04:45.755 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:45.755 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:45.755 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:45.755 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:45.755 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:45.755 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:45.755 0000:80:04.1 (8086 0e21): 
Already using the ioatdma driver 00:04:45.755 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:45.755 18:55:38 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:45.755 18:55:38 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:45.755 18:55:38 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:45.755 18:55:38 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:45.755 18:55:38 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:45.755 18:55:38 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:45.755 18:55:38 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:45.755 18:55:38 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:45.755 18:55:38 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:45.755 18:55:38 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:45.755 18:55:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:45.755 18:55:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:45.755 18:55:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:45.755 18:55:38 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:45.755 18:55:38 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:46.012 No valid GPT data, bailing 00:04:46.012 18:55:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:46.012 18:55:38 -- scripts/common.sh@391 -- # pt= 00:04:46.012 18:55:38 -- scripts/common.sh@392 -- # return 1 00:04:46.012 18:55:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:46.012 1+0 records in 00:04:46.012 1+0 records out 00:04:46.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0019424 s, 540 MB/s 00:04:46.012 18:55:38 -- spdk/autotest.sh@118 -- # sync 00:04:46.012 18:55:38 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:46.013 18:55:38 -- 
common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:46.013 18:55:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:47.912 18:55:39 -- spdk/autotest.sh@124 -- # uname -s 00:04:47.912 18:55:39 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:47.912 18:55:39 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:47.912 18:55:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.912 18:55:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.912 18:55:39 -- common/autotest_common.sh@10 -- # set +x 00:04:47.912 ************************************ 00:04:47.912 START TEST setup.sh 00:04:47.912 ************************************ 00:04:47.912 18:55:39 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:47.912 * Looking for test storage... 00:04:47.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:47.912 18:55:40 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:47.912 18:55:40 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:47.912 18:55:40 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:47.912 18:55:40 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.912 18:55:40 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.912 18:55:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.912 ************************************ 00:04:47.912 START TEST acl 00:04:47.912 ************************************ 00:04:47.912 18:55:40 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:47.912 * Looking for test storage... 
00:04:47.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:47.912 18:55:40 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:47.912 18:55:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:47.912 18:55:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:47.912 18:55:40 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:47.912 18:55:40 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:47.912 18:55:40 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:47.912 18:55:40 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:47.912 18:55:40 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:47.912 18:55:40 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:47.912 18:55:40 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:47.912 18:55:40 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:47.912 18:55:40 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:47.912 18:55:40 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:47.912 18:55:40 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:47.912 18:55:40 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.912 18:55:40 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.285 18:55:41 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:49.285 18:55:41 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:49.285 18:55:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.285 18:55:41 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:49.285 18:55:41 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.285 18:55:41 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:50.692 Hugepages 00:04:50.692 node hugesize free / total 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 00:04:50.692 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- 
setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:80:04.5 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.692 18:55:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.950 18:55:43 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:50.950 18:55:43 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:50.950 18:55:43 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.950 18:55:43 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.950 18:55:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:50.950 ************************************ 00:04:50.950 START TEST denied 00:04:50.950 ************************************ 00:04:50.950 18:55:43 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:50.950 18:55:43 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:04:50.950 18:55:43 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:50.950 18:55:43 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:04:50.950 18:55:43 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.950 18:55:43 setup.sh.acl.denied -- setup/common.sh@10 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:52.849 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:04:52.849 18:55:44 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:04:52.849 18:55:44 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:52.849 18:55:44 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:52.849 18:55:44 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:04:52.849 18:55:44 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:04:52.850 18:55:44 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:52.850 18:55:44 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:52.850 18:55:44 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:52.850 18:55:44 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:52.850 18:55:44 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:55.384 00:04:55.384 real 0m4.277s 00:04:55.384 user 0m1.286s 00:04:55.384 sys 0m2.100s 00:04:55.384 18:55:47 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.384 18:55:47 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:55.384 ************************************ 00:04:55.384 END TEST denied 00:04:55.384 ************************************ 00:04:55.384 18:55:47 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:55.384 18:55:47 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.384 18:55:47 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.384 18:55:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:55.384 ************************************ 00:04:55.384 START TEST allowed 00:04:55.384 
************************************ 00:04:55.384 18:55:47 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:55.384 18:55:47 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:04:55.384 18:55:47 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:55.384 18:55:47 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:04:55.384 18:55:47 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.384 18:55:47 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:57.918 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:57.918 18:55:50 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:57.918 18:55:50 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:57.918 18:55:50 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:57.918 18:55:50 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.918 18:55:50 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:59.820 00:04:59.820 real 0m4.323s 00:04:59.820 user 0m1.182s 00:04:59.820 sys 0m2.013s 00:04:59.820 18:55:51 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.820 18:55:51 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:59.820 ************************************ 00:04:59.820 END TEST allowed 00:04:59.820 ************************************ 00:04:59.820 00:04:59.820 real 0m11.813s 00:04:59.820 user 0m3.788s 00:04:59.820 sys 0m6.084s 00:04:59.820 18:55:51 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.820 18:55:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:59.820 ************************************ 00:04:59.820 END TEST acl 00:04:59.820 ************************************ 00:04:59.820 18:55:51 setup.sh 
-- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:59.820 18:55:51 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.820 18:55:51 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.820 18:55:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:59.820 ************************************ 00:04:59.820 START TEST hugepages 00:04:59.820 ************************************ 00:04:59.820 18:55:51 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:59.820 * Looking for test storage... 00:04:59.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.820 
18:55:51 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.820 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 38972068 kB' 'MemAvailable: 42894964 kB' 'Buffers: 2704 kB' 'Cached: 14659548 kB' 'SwapCached: 0 kB' 'Active: 11525364 kB' 'Inactive: 3694348 kB' 'Active(anon): 11085584 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560940 kB' 'Mapped: 195284 kB' 'Shmem: 10528124 kB' 'KReclaimable: 433204 kB' 'Slab: 827560 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394356 kB' 'KernelStack: 12880 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 12226304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198680 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 
18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 
18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.821 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.822 18:55:51 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:59.822 18:55:52 setup.sh.hugepages -- setup/hugepages.sh@37 -- # 
local node hp 00:04:59.822 18:55:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:59.822 18:55:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.822 18:55:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:59.822 18:55:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.822 18:55:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:59.822 18:55:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:59.823 18:55:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.823 18:55:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:59.823 18:55:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.823 18:55:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:59.823 18:55:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:59.823 18:55:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:59.823 18:55:52 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:59.823 18:55:52 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.823 18:55:52 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.823 18:55:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:59.823 ************************************ 00:04:59.823 START TEST default_setup 00:04:59.823 ************************************ 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # 
get_test_nr_hugepages 2097152 0 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup 
output 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.823 18:55:52 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:01.199 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:01.199 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:01.199 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:01.199 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:01.199 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:01.199 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:01.199 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:01.199 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:01.199 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:01.199 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:01.199 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:01.199 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:01.199 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:01.199 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:01.199 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:01.199 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:02.137 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:02.137 18:55:54 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41102832 kB' 'MemAvailable: 45025728 kB' 'Buffers: 2704 kB' 'Cached: 14659648 kB' 'SwapCached: 0 kB' 'Active: 11542616 kB' 'Inactive: 3694348 kB' 'Active(anon): 11102836 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577832 kB' 'Mapped: 195388 kB' 'Shmem: 10528224 kB' 'KReclaimable: 433204 kB' 'Slab: 827304 kB' 'SReclaimable: 
433204 kB' 'SUnreclaim: 394100 kB' 'KernelStack: 12832 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12243356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198712 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.137 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.137 
18:55:54
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@19 -- # local var val 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.138 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41102832 kB' 'MemAvailable: 45025728 kB' 'Buffers: 2704 kB' 'Cached: 14659652 kB' 'SwapCached: 0 kB' 'Active: 11542632 kB' 'Inactive: 3694348 kB' 'Active(anon): 11102852 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577880 kB' 'Mapped: 195388 kB' 'Shmem: 10528228 kB' 'KReclaimable: 433204 kB' 'Slab: 827304 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394100 kB' 'KernelStack: 12816 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12243376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198680 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.139 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.139 18:55:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:02.140 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:02.141 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.141 18:55:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.141 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:02.141 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:02.141 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:02.141 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.141 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.141 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.141 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.141 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.141 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41103136 kB' 'MemAvailable: 45026032 kB' 'Buffers: 2704 kB' 'Cached: 14659668 kB' 'SwapCached: 0 kB' 'Active: 11542524 kB' 'Inactive: 3694348 kB' 'Active(anon): 11102744 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577768 kB' 'Mapped: 195312 kB' 'Shmem: 10528244 kB' 'KReclaimable: 433204 kB' 'Slab: 827320 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394116 kB' 'KernelStack: 12832 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12243396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198680 kB' 
'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB'
[trace collapsed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ <key> == HugePages_Rsvd ]] / continue for every key from MemTotal through HugePages_Free; none match]
00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:02.143 nr_hugepages=1024 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.143 resv_hugepages=0 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.143
surplus_hugepages=0 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.143 anon_hugepages=0 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41103480 kB' 'MemAvailable: 45026376 kB' 'Buffers: 2704 kB' 'Cached: 14659692 kB' 'SwapCached: 0 kB' 'Active: 11542548 kB' 'Inactive: 3694348 kB' 'Active(anon): 11102768 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 
'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577768 kB' 'Mapped: 195312 kB' 'Shmem: 10528268 kB' 'KReclaimable: 433204 kB' 'Slab: 827316 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394112 kB' 'KernelStack: 12832 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12243420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198696 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.143 18:55:54 
[... ~40 near-identical xtrace iterations elided: setup/common.sh@31-32 repeats the `IFS=': '` / `read -r var val _` / `[[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]` / `continue` cycle for every remaining /proc/meminfo field (Buffers, Cached, SwapCached, Active, Inactive, Active(anon), ..., VmallocChunk, Percpu, CmaTotal), taking the continue branch each time between 00:05:02.143 and 00:05:02.405 ...]
continue 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 
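The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time until the requested key (here HugePages_Total) matches, then echoing the value. A minimal standalone sketch of that parsing technique; the function name and the canned input lines are illustrative, not the real helper:

```shell
#!/usr/bin/env bash
# Sketch of the lookup seen in the trace: split each "Name: value kB" line
# with IFS=': ' and print the value once the requested field name matches.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Canned lines (values taken from the log); real use would redirect
# /proc/meminfo into the loop instead.
printf '%s\n' 'MemTotal: 44517644 kB' 'HugePages_Total: 1024' |
    get_meminfo_field HugePages_Total   # prints 1024
```

The trailing `_` in the `read` soaks up the "kB" unit token, which is why the script can compare `$val` numerically right away.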
-- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 
'MemFree: 18446128 kB' 'MemUsed: 14383756 kB' 'SwapCached: 0 kB' 'Active: 8014860 kB' 'Inactive: 3338952 kB' 'Active(anon): 7659096 kB' 'Inactive(anon): 0 kB' 'Active(file): 355764 kB' 'Inactive(file): 3338952 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11059388 kB' 'Mapped: 116436 kB' 'AnonPages: 297548 kB' 'Shmem: 7364672 kB' 'KernelStack: 8360 kB' 'PageTables: 5072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154828 kB' 'Slab: 333348 kB' 'SReclaimable: 154828 kB' 'SUnreclaim: 178520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.405 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
[... near-identical xtrace iterations elided: the same setup/common.sh@31-32 read loop scans each node0 meminfo field (SwapCached, Active, Inactive, Active(anon), ..., ShmemPmdMapped, FileHugePages), comparing each name against HugePages_Surp and taking the continue branch, all at 00:05:02.405-00:05:02.406 ...]
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:02.406 18:55:54 
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:02.406 node0=1024 expecting 1024 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:02.406 00:05:02.406 real 0m2.581s 00:05:02.406 user 0m0.687s 00:05:02.406 sys 0m0.940s 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.406 18:55:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:02.406 ************************************ 00:05:02.406 END TEST default_setup 00:05:02.406 ************************************ 00:05:02.406 18:55:54 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:02.406 18:55:54 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.406 18:55:54 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.406 18:55:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:02.406 ************************************ 00:05:02.406 START TEST per_node_1G_alloc 00:05:02.406 ************************************ 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:02.406 
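The default_setup verification traced above folds each node's HugePages_Surp (0 here) into nodes_test and compares the result against the expected count, printing 'node0=1024 expecting 1024'. A minimal standalone sketch of that check, with values taken from the trace (the array contents are illustrative, not read from a live system):

```shell
#!/usr/bin/env bash
# Sketch of the verify step: add the per-node surplus pages into the
# per-node totals, then report each total against the expectation.
declare -A nodes_test=( [0]=1024 )  # node 0 count, as reported in the trace
surp=0                              # HugePages_Surp parsed from /proc/meminfo above
expected=1024

(( nodes_test[0] += surp ))
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[$node]} expecting ${expected}"
done
```

With surp=0 the totals are unchanged, so the comparison at hugepages.sh@130 ([[ 1024 == \1\0\2\4 ]]) succeeds.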
18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:02.406 18:55:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.406 18:55:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:03.783 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:03.783 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:03.783 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:03.783 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:03.783 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:03.783 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:03.783 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:03.783 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:03.783 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:03.783 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:03.783 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:03.783 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:03.783 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:03.783 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:03.783 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:03.783 
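The get_test_nr_hugepages trace above turns a 1048576 kB (1 GiB) request into 512 two-megabyte pages per node across nodes 0 and 1 (NRHUGE=512, HUGENODE=0,1). A minimal sketch of that arithmetic, assuming the default 2048 kB hugepage size; the variable names follow the trace, but this is a sketch, not SPDK's actual helper:

```shell
#!/usr/bin/env bash
# Sketch: derive a per-node hugepage count from a size request in kB.
size=1048576                 # requested size in kB (1 GiB), from the trace
default_hugepages=2048       # default hugepage size in kB (2 MiB)
user_nodes=(0 1)             # NUMA nodes named by HUGENODE=0,1

nr_hugepages=$(( size / default_hugepages ))   # 512 pages per node
declare -A nodes_test
for _no_nodes in "${user_nodes[@]}"; do
    nodes_test[$_no_nodes]=$nr_hugepages       # each listed node gets the full count
done
echo "NRHUGE=${nr_hugepages} HUGENODE=$(IFS=,; echo "${user_nodes[*]}")"
```

Each listed node receives the full 512-page count, which is why the follow-on verify pass (hugepages.sh@147) expects nr_hugepages=1024 in total.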
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:03.783 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:03.783 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:05:03.783 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:03.783 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:03.783 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:03.783 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:03.783 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:03.783 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:03.783 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:03.783 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:03.783 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:03.783 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:03.783 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.783 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.783 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.783 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.784 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.784 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.784 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.784 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.784 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.784 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.784 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41069148 kB' 'MemAvailable: 44992044 kB' 'Buffers: 2704 kB' 'Cached: 14659764 kB' 'SwapCached: 0 kB' 'Active: 11542656 kB' 'Inactive: 3694348 kB' 'Active(anon): 11102876 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577852 kB' 'Mapped: 195772 kB' 'Shmem: 10528340 kB' 'KReclaimable: 433204 kB' 'Slab: 827380 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394176 kB' 'KernelStack: 12816 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12243744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198824 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:03.784 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.784 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:03.784 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' [... per-key scan of /proc/meminfo: every key except AnonHugePages hits 'continue' at setup/common.sh@32; repetitive iterations elided ...] 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.050 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41069420 kB' 'MemAvailable: 44992316 kB' 'Buffers: 2704 kB' 'Cached: 14659768 kB' 'SwapCached: 0 kB' 'Active: 11542876 kB' 'Inactive: 3694348 kB' 'Active(anon): 11103096 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578084 kB' 'Mapped: 195392 kB' 'Shmem: 10528344 kB' 'KReclaimable: 433204 kB' 'Slab: 827380 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394176 kB' 'KernelStack: 12832 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12243764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198792 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 
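The get_meminfo pass traced here splits each /proc/meminfo line on IFS=': ' into a key and a value, then echoes the value whose key matches the requested field (HugePages_Surp above). A minimal standalone sketch of that parsing pattern, reading from a sample string rather than the live /proc/meminfo:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo parsing loop: split "Key: value [kB]" lines
# on ':' / spaces and print the value for the requested key.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1   # key not found
}

sample='MemTotal: 60541708 kB
HugePages_Total: 1024
HugePages_Surp: 0'

surp=$(printf '%s\n' "$sample" | get_meminfo HugePages_Surp)
echo "HugePages_Surp=${surp}"   # -> HugePages_Surp=0
```

When a node is given (setup/common.sh@18), the real script reads /sys/devices/system/node/node<N>/meminfo instead; with node empty, as in this trace, it falls back to the system-wide file.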
00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.050 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [... per-key scan of /proc/meminfo for HugePages_Surp; repetitive 'continue' iterations elided ...] 
read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.051 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41069204 kB' 'MemAvailable: 44992100 kB' 'Buffers: 2704 kB' 'Cached: 14659784 kB' 'SwapCached: 0 kB' 'Active: 11542784 kB' 'Inactive: 3694348 kB' 
'Active(anon): 11103004 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577940 kB' 'Mapped: 195316 kB' 'Shmem: 10528360 kB' 'KReclaimable: 433204 kB' 'Slab: 827396 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394192 kB' 'KernelStack: 12848 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12243784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198792 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.052 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 
18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.053 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.054 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.054 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.054 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:04.055 nr_hugepages=1024 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:04.055 resv_hugepages=0 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:04.055 surplus_hugepages=0 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:04.055 anon_hugepages=0 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 
-- # get_meminfo HugePages_Total 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41068952 kB' 'MemAvailable: 44991848 kB' 'Buffers: 2704 kB' 'Cached: 14659808 kB' 'SwapCached: 0 kB' 'Active: 11542828 kB' 'Inactive: 3694348 kB' 'Active(anon): 11103048 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577944 kB' 'Mapped: 195316 kB' 'Shmem: 10528384 kB' 'KReclaimable: 433204 kB' 'Slab: 827396 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394192 kB' 'KernelStack: 12848 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12243808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198824 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 
18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.055 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.056 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:04.057 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.057 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19493656 kB' 'MemUsed: 13336228 kB' 'SwapCached: 0 kB' 'Active: 8015756 kB' 'Inactive: 3338952 kB' 'Active(anon): 7659992 kB' 'Inactive(anon): 0 kB' 'Active(file): 355764 kB' 'Inactive(file): 3338952 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11059400 kB' 'Mapped: 116440 kB' 'AnonPages: 298488 kB' 'Shmem: 7364684 kB' 'KernelStack: 8392 kB' 'PageTables: 5176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154828 kB' 'Slab: 333404 kB' 'SReclaimable: 154828 kB' 'SUnreclaim: 178576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.057 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
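The field-by-field scan traced above (`IFS=': '`, `read -r var val _`, `continue` until the field name matches) is the body of `get_meminfo` in `setup/common.sh`. A minimal reconstruction, inferred from the trace rather than taken from the SPDK source, looks like this; the function name and line references follow the log, while the exact control flow is an assumption:

```shell
#!/usr/bin/env bash
shopt -s extglob    # needed for the "Node +([0-9]) " prefix pattern below

# Hedged sketch of setup/common.sh get_meminfo as seen in the trace:
# scan /proc/meminfo (or a NUMA node's own meminfo) for one field and
# print its numeric value.
get_meminfo() {
    local get=$1 node=$2
    local var val _ mem mem_f=/proc/meminfo
    # common.sh@23-24: per-node queries prefer the node's sysfs meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # common.sh@29: per-node files prefix each line with "Node N "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    # common.sh@31-33: split "Field:  value kB" on ':' and spaces, skipping
    # every line (the "continue" runs in the log) until the field matches.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
```

With the node-0 snapshot printed earlier in the trace, `get_meminfo HugePages_Total 0` would yield `1024`-style output (here `512` per node); the `echo 1024` / `echo 0` lines in the log are exactly this final match firing.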
00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 
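The `hugepages.sh@115-117` lines above show the other half of the check: after verifying the global total (`(( 1024 == nr_hugepages + surp + resv ))`), the script folds reserved and per-node surplus pages into each node's expected count before comparing. A self-contained sketch of that bookkeeping, with stand-in node names and a hard-coded surplus instead of a live `get_meminfo` call (both assumptions; the 512-per-node target and zero surplus come from the trace):

```shell
#!/usr/bin/env bash
# Hedged reconstruction of the per-node accounting in setup/hugepages.sh.
resv=0                          # reserved pages; 0 in this run per the trace
declare -a nodes_test

# get_nodes (hugepages.sh@29-32): one slot per /sys/devices/system/node/nodeN;
# the glob is replaced by a fixed list here so the sketch runs anywhere.
for node in node0 node1; do
    nodes_test[${node##*node}]=512   # each node should hold half of 1024
done
no_nodes=${#nodes_test[@]}

# hugepages.sh@115-117: add reserved pages and that node's HugePages_Surp
# (0 in the trace, echoed by get_meminfo) to the expected per-node count.
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    surp=0                           # stand-in for: get_meminfo HugePages_Surp $node
    (( nodes_test[node] += surp ))
done

printf 'node%s=%s\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"
```

With `resv` and `surp` both zero, each node's expectation stays at 512, matching the `HugePages_Total: 512` / `HugePages_Free: 512` values the node-0 and node-1 meminfo dumps report.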
00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 21575176 kB' 'MemUsed: 6136648 kB' 'SwapCached: 0 kB' 'Active: 3527124 kB' 'Inactive: 355396 kB' 'Active(anon): 3443108 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355396 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3603156 kB' 'Mapped: 78876 kB' 'AnonPages: 279460 kB' 'Shmem: 3163744 kB' 'KernelStack: 4456 kB' 'PageTables: 2992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 278376 kB' 'Slab: 493992 kB' 'SReclaimable: 278376 kB' 'SUnreclaim: 215616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.058 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 
18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.059 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.060 18:55:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:04.060 node0=512 expecting 512 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:04.060 node1=512 expecting 512 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:04.060 00:05:04.060 real 0m1.720s 00:05:04.060 user 0m0.664s 00:05:04.060 sys 0m1.027s 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.060 18:55:56 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:04.060 ************************************ 00:05:04.060 END TEST per_node_1G_alloc 00:05:04.060 ************************************ 00:05:04.060 18:55:56 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:04.060 18:55:56 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.060 18:55:56 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.060 18:55:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:04.060 ************************************ 00:05:04.060 START TEST even_2G_alloc 00:05:04.060 ************************************ 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 
00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.060 18:55:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:05.435 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:05.435 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:05.435 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:05.435 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:05.435 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:05.435 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:05.435 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:05.435 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:05.435 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:05.435 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:05.435 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:05.435 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:05.435 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:05.435 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:05.435 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:05.435 0000:80:04.1 (8086 0e21): Already using the vfio-pci 
driver 00:05:05.435 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.698 18:55:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.698 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41069032 kB' 'MemAvailable: 44991928 kB' 'Buffers: 2704 kB' 'Cached: 14659900 kB' 'SwapCached: 0 kB' 'Active: 11543476 kB' 'Inactive: 3694348 kB' 'Active(anon): 11103696 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578428 kB' 'Mapped: 195368 kB' 'Shmem: 10528476 kB' 'KReclaimable: 433204 kB' 'Slab: 827340 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394136 kB' 'KernelStack: 12880 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12244168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198920 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 
18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.699 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41068988 kB' 'MemAvailable: 44991884 kB' 'Buffers: 2704 kB' 'Cached: 14659904 kB' 'SwapCached: 0 kB' 'Active: 11543536 kB' 'Inactive: 3694348 kB' 'Active(anon): 11103756 kB' 
'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578488 kB' 'Mapped: 195412 kB' 'Shmem: 10528480 kB' 'KReclaimable: 433204 kB' 'Slab: 827372 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394168 kB' 'KernelStack: 12864 kB' 'PageTables: 8176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12244188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198888 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.700 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 
18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 60541708 kB' 'MemFree: 41069888 kB' 'MemAvailable: 44992784 kB' 'Buffers: 2704 kB' 'Cached: 14659920 kB' 'SwapCached: 0 kB' 'Active: 11543108 kB' 'Inactive: 3694348 kB' 'Active(anon): 11103328 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578036 kB' 'Mapped: 195336 kB' 'Shmem: 10528496 kB' 'KReclaimable: 433204 kB' 'Slab: 827396 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394192 kB' 'KernelStack: 12864 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12244208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198888 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.701 18:55:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.701 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.702 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.703 
18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:05.703 nr_hugepages=1024 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.703 resv_hugepages=0 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.703 surplus_hugepages=0 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.703 anon_hugepages=0 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.703 18:55:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41069888 kB' 'MemAvailable: 44992784 kB' 'Buffers: 2704 kB' 'Cached: 14659944 kB' 'SwapCached: 0 kB' 'Active: 11543124 kB' 'Inactive: 3694348 kB' 'Active(anon): 11103344 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578032 kB' 'Mapped: 195336 kB' 'Shmem: 10528520 kB' 'KReclaimable: 433204 kB' 'Slab: 827396 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394192 kB' 'KernelStack: 12864 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12244232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198888 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.703 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[identical continue iterations elided for: VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted]
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.704 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.705 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19491728 kB' 'MemUsed: 13338156 kB' 'SwapCached: 0 kB' 'Active: 8015676 kB' 'Inactive: 3338952 kB' 'Active(anon): 7659912 kB' 'Inactive(anon): 0 kB' 'Active(file): 355764 kB' 'Inactive(file): 3338952 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11059404 kB' 'Mapped: 116460 kB' 'AnonPages: 298324 kB' 'Shmem: 7364688 kB' 'KernelStack: 8392 kB' 'PageTables: 5128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154828 kB' 'Slab: 333508 kB' 'SReclaimable: 154828 kB' 'SUnreclaim: 178680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:05.705 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.705 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:05.705 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.705 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[identical continue iterations elided for the remaining node0 fields, MemFree through HugePages_Free]
00:05:05.705 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.705 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.705 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:05.705 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:05.705 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:05.705 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:05.705 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:05.705 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.705 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:05:05.705 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:05.706 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.706 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.706 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:05.706 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:05.706 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.706 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.706 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.706 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.706 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 21579528 kB' 'MemUsed: 6132296 kB' 'SwapCached: 0 kB' 'Active: 3527548 kB' 'Inactive: 355396 kB' 'Active(anon): 3443532 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355396 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3603284 kB' 'Mapped: 78876 kB' 'AnonPages: 279704 kB' 'Shmem: 3163872 kB' 'KernelStack: 4472 kB' 'PageTables: 3040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 278376 kB' 'Slab: 493888 kB' 'SReclaimable: 278376 kB' 'SUnreclaim: 215512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:05.706 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.706 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:05.706 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.706 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[identical continue iterations elided for the remaining node1 fields, MemFree through HugePages_Total]
00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var
val _ 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:05.707 node0=512 expecting 512 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:05.707 node1=512 expecting 512 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:05.707 00:05:05.707 real 0m1.657s 
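The get_meminfo calls traced above scan /proc/meminfo line by line, splitting each line on ': ' and continuing past every field until the requested one matches. A minimal standalone sketch of that pattern (the helper name here is hypothetical; SPDK's actual helper lives in test/setup/common.sh and additionally supports per-NUMA-node lookups):

```shell
#!/usr/bin/env bash
# Sketch of the /proc/meminfo lookup pattern seen in the trace: split each
# line on ': ', skip non-matching fields, print the value of the requested
# field (the ' kB' unit, when present, lands in the discarded third word).
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Surp
```

With IFS set to ': ', `read -r var val _` puts the field name in `var`, the numeric value in `val`, and any trailing unit in `_`, which is why the trace's comparisons operate on bare field names.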
00:05:05.707 user 0m0.726s 00:05:05.707 sys 0m0.902s 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.707 18:55:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:05.707 ************************************ 00:05:05.707 END TEST even_2G_alloc 00:05:05.707 ************************************ 00:05:05.707 18:55:58 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:05.707 18:55:58 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.707 18:55:58 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.707 18:55:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:05.707 ************************************ 00:05:05.707 START TEST odd_alloc 00:05:05.707 ************************************ 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.707 18:55:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:07.087 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:07.087 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:07.087 
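The odd_alloc setup traced above distributes _nr_hugepages=1025 across _no_nodes=2 by filling nodes from the highest index down and folding the remainder into node 0, yielding nodes_test values of 513 and 512. A sketch of that split (illustrative names, not SPDK's exact hugepages.sh code):

```shell
#!/usr/bin/env bash
# Sketch of the per-node split performed by get_test_nr_hugepages_per_node in
# the trace: each node gets nr/nodes pages, assigned from the highest node
# index down, and the division remainder lands on node 0 -- so 1025 pages
# over 2 nodes becomes node0=513, node1=512.
split_hugepages() {
    local nr=$1 nodes=$2 i
    local -a nodes_test
    local per=$((nr / nodes)) rem=$((nr % nodes))
    while (( nodes > 0 )); do
        nodes_test[nodes - 1]=$per
        (( nodes-- ))
    done
    (( nodes_test[0] += rem ))   # remainder lands on the first node
    for i in "${!nodes_test[@]}"; do
        printf 'node%d=%d\n' "$i" "${nodes_test[i]}"
    done
}

split_hugepages 1025 2
```

An even request such as 1024 pages splits cleanly into 512 per node, which is why the earlier even_2G_alloc test expects 512 on both nodes.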
0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:07.087 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:07.087 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:07.087 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:07.087 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:07.087 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:07.087 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:07.087 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:07.087 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:07.087 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:07.087 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:07.087 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:07.087 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:07.087 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:07.087 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:07.350 
18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.350 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41083864 kB' 'MemAvailable: 45006760 kB' 'Buffers: 2704 kB' 'Cached: 14660032 kB' 'SwapCached: 0 kB' 'Active: 11545960 kB' 'Inactive: 3694348 kB' 'Active(anon): 11106180 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580808 kB' 'Mapped: 195232 kB' 'Shmem: 10528608 kB' 'KReclaimable: 433204 kB' 'Slab: 827180 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 393976 kB' 'KernelStack: 12752 kB' 'PageTables: 7732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12235172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198764 kB' 
'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:07.350 18:55:59 [trace condensed: setup/common.sh@32 continues past every /proc/meminfo field from MemTotal through HardwareCorrupted until AnonHugePages matches] 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:07.352 18:55:59 [trace condensed: setup/common.sh@17-31 set up the HugePages_Surp lookup exactly as for AnonHugePages above] 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB'
'MemFree: 41089048 kB' 'MemAvailable: 45011944 kB' 'Buffers: 2704 kB' 'Cached: 14660036 kB' 'SwapCached: 0 kB' 'Active: 11539704 kB' 'Inactive: 3694348 kB' 'Active(anon): 11099924 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574580 kB' 'Mapped: 194728 kB' 'Shmem: 10528612 kB' 'KReclaimable: 433204 kB' 'Slab: 827188 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 393984 kB' 'KernelStack: 12800 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12229072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198760 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:07.352 18:55:59 [trace condensed: setup/common.sh@32 continues past MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active while scanning for HugePages_Surp] 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.352 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:07.353 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.353 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41090684 kB' 'MemAvailable: 45013580 kB' 'Buffers: 2704 kB' 'Cached: 14660052 kB' 'SwapCached: 0 kB' 'Active: 11539664 kB' 'Inactive: 3694348 kB' 'Active(anon): 11099884 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574536 kB' 'Mapped: 194304 kB' 'Shmem: 10528628 kB' 'KReclaimable: 433204 kB' 'Slab: 827192 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 393988 kB' 'KernelStack: 12800 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 
kB' 'Committed_AS: 12229092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198760 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 
18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.354 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 
18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:07.355 nr_hugepages=1025 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:07.355 resv_hugepages=0 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:07.355 surplus_hugepages=0 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:07.355 anon_hugepages=0 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@19 -- # local var val 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.355 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41090684 kB' 'MemAvailable: 45013580 kB' 'Buffers: 2704 kB' 'Cached: 14660072 kB' 'SwapCached: 0 kB' 'Active: 11539664 kB' 'Inactive: 3694348 kB' 'Active(anon): 11099884 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574532 kB' 'Mapped: 194304 kB' 'Shmem: 10528648 kB' 'KReclaimable: 433204 kB' 'Slab: 827192 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 393988 kB' 'KernelStack: 12800 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12229112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198760 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.356 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@27 -- # local node 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.357 
18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19516824 kB' 'MemUsed: 13313060 kB' 'SwapCached: 0 kB' 'Active: 8013188 kB' 'Inactive: 3338952 kB' 'Active(anon): 7657424 kB' 'Inactive(anon): 0 kB' 'Active(file): 355764 kB' 'Inactive(file): 3338952 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11059412 kB' 'Mapped: 115888 kB' 'AnonPages: 295948 kB' 'Shmem: 7364696 kB' 'KernelStack: 8424 kB' 'PageTables: 5152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154828 kB' 'Slab: 333444 kB' 'SReclaimable: 154828 kB' 'SUnreclaim: 178616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.357 
18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.357 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.357 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 
18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.619 
18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 21580656 kB' 'MemUsed: 6131168 kB' 'SwapCached: 0 kB' 'Active: 3526776 kB' 'Inactive: 355396 kB' 'Active(anon): 3442760 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355396 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3603404 kB' 'Mapped: 78416 kB' 'AnonPages: 278848 kB' 'Shmem: 3163992 kB' 'KernelStack: 4360 kB' 'PageTables: 2656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 278376 kB' 'Slab: 493748 kB' 'SReclaimable: 278376 kB' 
'SUnreclaim: 215372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 
18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.619 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.620 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.620 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.620 18:55:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:07.620 node0=512 expecting 513 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:07.620 node1=513 expecting 512 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:07.620 00:05:07.620 real 0m1.706s 00:05:07.620 user 0m0.724s 00:05:07.620 sys 0m0.948s 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.620 18:55:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:07.620 ************************************ 00:05:07.620 END TEST odd_alloc 00:05:07.620 ************************************ 00:05:07.620 18:55:59 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:07.620 18:55:59 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.620 18:55:59 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.620 18:55:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:07.620 ************************************ 00:05:07.620 START TEST custom_alloc 00:05:07.620 ************************************ 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.620 18:55:59 
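The xtrace above (hugepages.sh@62–@84) walks `get_test_nr_hugepages_per_node` splitting 512 pages evenly over 2 NUMA nodes, leaving 256 per node. A minimal sketch of that loop, with the body inferred from the traced values (the `: 256` / `: 1` lines are the `:` no-op evaluating the arithmetic side effects); variable names follow the trace, but this is a reconstruction, not the upstream source:

```shell
#!/usr/bin/env bash
# Sketch: even per-node hugepage split, as in get_test_nr_hugepages_per_node.
_nr_hugepages=512
_no_nodes=2
nodes_test=()

while (( _no_nodes > 0 )); do
  # Highest-numbered node gets its share first: 512/2 = 256, then 256/1 = 256.
  nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
  : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))  # traced as ": 256", ": 0"
  : $(( --_no_nodes ))                                 # traced as ": 1",   ": 0"
done

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=256 node1=256
```

Dividing by the remaining node count on each pass keeps the split even when the total does not divide cleanly, matching the `nodes_test[_no_nodes - 1]=256` assignments in the trace.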
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.620 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:07.621 18:55:59 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:07.621 18:55:59 
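The hugepages.sh@181–@183 loop above builds the `HUGENODE` spec one `nodes_hp[N]=count` entry per node while summing the total; with `IFS=,` set locally (hugepages.sh@167) the array later flattens to the comma-joined string seen at @187. A sketch of that assembly, using the counts from this run:

```shell
#!/usr/bin/env bash
# Sketch: assemble the HUGENODE spec string from per-node page counts.
nodes_hp=(512 1024)   # node0 wants 512 pages, node1 wants 1024 (as in this run)
HUGENODE=()
_nr_hugepages=0

for node in "${!nodes_hp[@]}"; do
  HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
  (( _nr_hugepages += nodes_hp[node] ))   # running total: 1536
done

# Joining with IFS=, yields the string handed to scripts/setup.sh:
spec="$(IFS=,; echo "${HUGENODE[*]}")"
echo "$spec"   # nodes_hp[0]=512,nodes_hp[1]=1024
```

The `1536` total matches the `nr_hugepages=1536` the trace records just before `verify_nr_hugepages`, and `HugePages_Total: 1536` in the meminfo dumps that follow.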
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.621 18:55:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:09.002 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:09.002 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:09.002 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:09.002 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:09.002 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:09.002 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:09.002 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:09.002 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:09.002 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:09.002 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:09.002 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:09.002 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:09.002 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:09.002 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:09.002 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:09.002 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:09.002 0000:80:04.0 (8086 0e20): Already using 
the vfio-pci driver 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.002 18:56:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40048040 kB' 'MemAvailable: 43970936 kB' 'Buffers: 2704 kB' 'Cached: 14660164 kB' 'SwapCached: 0 kB' 'Active: 11540008 kB' 'Inactive: 3694348 kB' 'Active(anon): 11100228 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574668 kB' 'Mapped: 194348 kB' 'Shmem: 10528740 kB' 'KReclaimable: 433204 kB' 'Slab: 827192 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 393988 kB' 'KernelStack: 12800 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12229184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198888 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- 
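The long run of `[[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] … continue` entries is `get_meminfo` scanning each meminfo field in turn until the requested one matches. A simplified sketch of that loop, split on `IFS=': '` as in common.sh@31; the per-node branch that reads `/sys/devices/system/node/nodeN/meminfo` and strips the `Node N ` prefix is omitted here, and the demo runs against a small sample file rather than the live `/proc/meminfo`:

```shell
#!/usr/bin/env bash
# Sketch: the get_meminfo field scan behind the [[ field == ... ]] trace.
get_meminfo() {
  local get=$1 src=$2 var val _
  while IFS=': ' read -r var val _; do
    # Print the value once the field name matches; otherwise keep scanning.
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < "$src"
  return 1
}

# Demo against a sample in /proc/meminfo's format:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 60541708 kB' 'HugePages_Total: 1536' > "$sample"
get_meminfo HugePages_Total "$sample"   # prints 1536
rm -f "$sample"
```

Because `IFS` contains both `:` and space, `read` drops the colon and leaves the unit (`kB`) in the discard variable `_`, which is why the caller at @33 can `echo` the bare number.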
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.002 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 
18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.003 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.004 
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:09.004 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40051344 kB' 'MemAvailable: 43974240 kB' 'Buffers: 2704 kB' 'Cached: 14660168 kB' 'SwapCached: 0 kB' 'Active: 11540340 kB' 'Inactive: 3694348 kB' 'Active(anon): 11100560 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575008 kB' 'Mapped: 194316 kB' 'Shmem: 10528744 kB' 'KReclaimable: 433204 kB' 'Slab: 827188 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 393984 kB' 'KernelStack: 12832 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12229204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198856 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB'
[repetitive field-by-field scan collapsed: each meminfo key above is tested at common.sh@32 against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and hits `continue` until HugePages_Surp matches]
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:09.006 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40052308 kB' 'MemAvailable: 43975204 kB' 'Buffers: 2704 kB' 'Cached: 14660184 kB' 'SwapCached: 0 kB' 'Active: 11540324 kB' 'Inactive: 3694348 kB' 'Active(anon): 11100544 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574976 kB' 'Mapped: 194316 kB' 'Shmem: 10528760 kB' 'KReclaimable: 433204 kB' 'Slab: 827284 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394080 kB' 'KernelStack: 12832 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12229224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198856 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB'
[repetitive field-by-field scan collapsed: keys tested at common.sh@32 against \H\u\g\e\P\a\g\e\s\_\R\s\v\d; the trace breaks off mid-scan at VmallocUsed]
setup/common.sh@31 -- # read -r var val _ 00:05:09.007 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.007 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.007 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.007 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.007 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.007 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.007 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.007 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.007 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.007 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.007 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.007 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.007 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.007 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.007 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:09.008 nr_hugepages=1536 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:09.008 resv_hugepages=0 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:09.008 surplus_hugepages=0 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:09.008 anon_hugepages=0 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40052308 kB' 'MemAvailable: 43975204 kB' 
'Buffers: 2704 kB' 'Cached: 14660184 kB' 'SwapCached: 0 kB' 'Active: 11540048 kB' 'Inactive: 3694348 kB' 'Active(anon): 11100268 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574696 kB' 'Mapped: 194316 kB' 'Shmem: 10528760 kB' 'KReclaimable: 433204 kB' 'Slab: 827284 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394080 kB' 'KernelStack: 12832 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12229244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198856 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.008 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 
18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.009 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 
18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19510236 kB' 'MemUsed: 
13319648 kB' 'SwapCached: 0 kB' 'Active: 8013172 kB' 'Inactive: 3338952 kB' 'Active(anon): 7657408 kB' 'Inactive(anon): 0 kB' 'Active(file): 355764 kB' 'Inactive(file): 3338952 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11059424 kB' 'Mapped: 115900 kB' 'AnonPages: 295808 kB' 'Shmem: 7364708 kB' 'KernelStack: 8424 kB' 'PageTables: 5068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154828 kB' 'Slab: 333564 kB' 'SReclaimable: 154828 kB' 'SUnreclaim: 178736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.010 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.011 18:56:01 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.011 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 20542072 kB' 'MemUsed: 7169752 kB' 'SwapCached: 0 kB' 'Active: 3527228 kB' 'Inactive: 355396 kB' 'Active(anon): 3443212 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355396 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3603528 kB' 'Mapped: 78416 kB' 'AnonPages: 279180 kB' 'Shmem: 3164116 kB' 'KernelStack: 4408 kB' 'PageTables: 2796 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 278376 kB' 'Slab: 493720 kB' 'SReclaimable: 278376 kB' 'SUnreclaim: 215344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.300 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.301 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.302 18:56:01 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:09.302 node0=512 expecting 512 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:09.302 node1=1024 expecting 1024 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:09.302 00:05:09.302 real 0m1.588s 00:05:09.302 user 0m0.691s 00:05:09.302 sys 0m0.864s 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.302 18:56:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:09.302 ************************************ 00:05:09.302 END TEST custom_alloc 00:05:09.302 ************************************ 00:05:09.302 18:56:01 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:09.302 18:56:01 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.302 18:56:01 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.302 18:56:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:09.302 ************************************ 00:05:09.302 START TEST no_shrink_alloc 00:05:09.302 ************************************ 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:09.302 18:56:01 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 
00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.302 18:56:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:10.680 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:10.680 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:10.680 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:10.680 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:10.680 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:10.680 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:10.680 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:10.680 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:10.680 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:10.680 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:10.680 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:10.680 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:10.680 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:10.680 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:10.680 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:10.680 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:10.681 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40991292 kB' 'MemAvailable: 44914188 kB' 'Buffers: 2704 kB' 'Cached: 14660300 kB' 'SwapCached: 0 kB' 'Active: 11541024 kB' 'Inactive: 3694348 kB' 'Active(anon): 11101244 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575616 kB' 'Mapped: 194440 kB' 'Shmem: 10528876 kB' 'KReclaimable: 433204 kB' 'Slab: 827232 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394028 kB' 'KernelStack: 12848 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12229868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198776 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.681 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.682 
18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.682 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40991996 kB' 'MemAvailable: 44914892 kB' 'Buffers: 2704 kB' 'Cached: 14660300 kB' 'SwapCached: 0 kB' 'Active: 11540276 kB' 'Inactive: 3694348 kB' 'Active(anon): 11100496 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574868 kB' 'Mapped: 
194328 kB' 'Shmem: 10528876 kB' 'KReclaimable: 433204 kB' 'Slab: 827228 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394024 kB' 'KernelStack: 12832 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12229884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198744 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.683 
18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.683 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 
18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.684 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40992288 kB' 'MemAvailable: 44915184 kB' 'Buffers: 2704 kB' 'Cached: 14660320 kB' 'SwapCached: 0 kB' 'Active: 11540880 kB' 'Inactive: 3694348 kB' 'Active(anon): 11101100 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575892 kB' 'Mapped: 194764 kB' 'Shmem: 10528896 kB' 'KReclaimable: 433204 kB' 'Slab: 827228 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394024 kB' 'KernelStack: 12816 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12231264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198744 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.685 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.686 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 
18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:10.687 nr_hugepages=1024 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:10.687 resv_hugepages=0 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:10.687 surplus_hugepages=0 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:10.687 anon_hugepages=0 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:10.687 18:56:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40992540 kB' 'MemAvailable: 44915436 kB' 'Buffers: 2704 kB' 'Cached: 14660340 kB' 'SwapCached: 0 kB' 'Active: 11544348 kB' 'Inactive: 3694348 kB' 'Active(anon): 11104568 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578872 kB' 'Mapped: 194764 kB' 'Shmem: 10528916 kB' 'KReclaimable: 433204 kB' 'Slab: 827228 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 394024 kB' 'KernelStack: 12816 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12234452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198728 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 
18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.687 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:10.688 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc 
-- setup/hugepages.sh@32 -- # no_nodes=2 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18444344 kB' 'MemUsed: 14385540 kB' 'SwapCached: 0 kB' 'Active: 8019268 kB' 'Inactive: 3338952 kB' 'Active(anon): 7663504 kB' 'Inactive(anon): 0 kB' 'Active(file): 355764 kB' 
'Inactive(file): 3338952 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11059504 kB' 'Mapped: 116064 kB' 'AnonPages: 301904 kB' 'Shmem: 7364788 kB' 'KernelStack: 8424 kB' 'PageTables: 5200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154828 kB' 'Slab: 333592 kB' 'SReclaimable: 154828 kB' 'SUnreclaim: 178764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.689 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 
18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 
18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.690 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:10.691 node0=1024 expecting 1024 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.691 18:56:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:12.064 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:12.064 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:12.064 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:12.064 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:12.064 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:12.064 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:12.064 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:12.064 0000:00:04.0 (8086 0e20): Already using the vfio-pci 
driver 00:05:12.064 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:12.064 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:12.064 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:12.064 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:12.064 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:12.064 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:12.064 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:12.064 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:12.064 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:12.328 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 
00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40969728 kB' 'MemAvailable: 44892624 kB' 'Buffers: 2704 kB' 'Cached: 14660404 kB' 'SwapCached: 0 kB' 'Active: 11542652 kB' 'Inactive: 3694348 kB' 'Active(anon): 11102872 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577008 kB' 'Mapped: 194448 kB' 'Shmem: 10528980 kB' 'KReclaimable: 433204 kB' 'Slab: 827052 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 393848 kB' 'KernelStack: 13216 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12232504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199080 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:12.328 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [xtrace condensed: per-field scan loop (IFS=': '; read -r var val _; continue) over SwapTotal..HardwareCorrupted with no match] 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.329 18:56:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.329 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40976004 kB' 'MemAvailable: 44898900 kB' 'Buffers: 2704 kB' 'Cached: 14660408 kB' 'SwapCached: 0 kB' 'Active: 11542336 kB' 'Inactive: 3694348 kB' 'Active(anon): 11102556 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576704 kB' 'Mapped: 194380 kB' 'Shmem: 10528984 kB' 'KReclaimable: 433204 kB' 'Slab: 827052 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 393848 kB' 'KernelStack: 13248 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12232520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199048 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:12.330 18:56:04 [xtrace condensed: same per-field scan loop over MemTotal..HugePages_Rsvd with no match] 00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
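[editor's note] The trace above is the `get_meminfo` helper scanning `/proc/meminfo` field by field until the requested key matches, then echoing its value (0 if absent). A minimal sketch of that technique, reconstructed from the trace rather than copied from SPDK's `setup/common.sh` (the sample `snapshot` variable is illustrative, and stdin stands in for `/proc/meminfo`):

```shell
#!/usr/bin/env bash
# Re-creation of the traced logic: split each "Field: value kB" line on
# ": " via IFS, skip non-matching fields, print the value on a match,
# and fall back to 0 when the field is absent.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    echo 0
}

# Example against a fixed snapshot instead of the live /proc/meminfo:
snapshot='MemTotal: 60541708 kB
HugePages_Surp: 0
AnonHugePages: 0 kB'
get_meminfo HugePages_Surp <<<"$snapshot"   # prints 0
```

In the real run this loop executes once per meminfo field per query, which is why each `get_meminfo` call produces the long `continue`/`read` runs seen in the log.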
00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.331 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40981152 kB' 'MemAvailable: 44904048 kB' 'Buffers: 2704 kB' 'Cached: 14660428 kB' 'SwapCached: 0 kB' 'Active: 11541616 kB' 'Inactive: 3694348 kB' 'Active(anon): 11101836 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575960 kB' 'Mapped: 194320 kB' 'Shmem: 10529004 kB' 'KReclaimable: 433204 kB' 'Slab: 827164 kB' 
'SReclaimable: 433204 kB' 'SUnreclaim: 393960 kB' 'KernelStack: 12832 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12230184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198824 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.332 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.333 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.334 18:56:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:12.334 nr_hugepages=1024 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.334 resv_hugepages=0 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.334 surplus_hugepages=0 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.334 anon_hugepages=0 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 
-- # mapfile -t mem 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40982100 kB' 'MemAvailable: 44904996 kB' 'Buffers: 2704 kB' 'Cached: 14660428 kB' 'SwapCached: 0 kB' 'Active: 11540988 kB' 'Inactive: 3694348 kB' 'Active(anon): 11101208 kB' 'Inactive(anon): 0 kB' 'Active(file): 439780 kB' 'Inactive(file): 3694348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575412 kB' 'Mapped: 194320 kB' 'Shmem: 10529004 kB' 'KReclaimable: 433204 kB' 'Slab: 827164 kB' 'SReclaimable: 433204 kB' 'SUnreclaim: 393960 kB' 'KernelStack: 12704 kB' 'PageTables: 7680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12230204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198824 kB' 'VmallocChunk: 0 kB' 'Percpu: 44352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1893980 kB' 'DirectMap2M: 18997248 kB' 'DirectMap1G: 48234496 kB' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.334 18:56:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.334 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.335 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node=0 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18427676 kB' 'MemUsed: 14402208 kB' 'SwapCached: 0 kB' 'Active: 8013944 kB' 'Inactive: 3338952 kB' 'Active(anon): 7658180 kB' 'Inactive(anon): 0 kB' 'Active(file): 355764 kB' 'Inactive(file): 3338952 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11059612 kB' 'Mapped: 115920 kB' 'AnonPages: 296452 kB' 'Shmem: 7364896 kB' 'KernelStack: 8408 kB' 'PageTables: 5128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154828 kB' 'Slab: 333520 kB' 'SReclaimable: 154828 kB' 'SUnreclaim: 178692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.336 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 
18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 
18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.337 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.337 18:56:04 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:12.338 node0=1024 expecting 1024 00:05:12.338 18:56:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:12.338 00:05:12.338 real 0m3.182s 00:05:12.338 user 0m1.308s 00:05:12.338 sys 0m1.811s 00:05:12.338 18:56:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.338 18:56:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:12.338 ************************************ 00:05:12.338 END TEST no_shrink_alloc 00:05:12.338 ************************************ 00:05:12.338 18:56:04 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:12.338 18:56:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:12.338 18:56:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:12.338 18:56:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.338 18:56:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.338 18:56:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.338 18:56:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.338 18:56:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:12.338 18:56:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.338 18:56:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.338 18:56:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.338 18:56:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.338 18:56:04 setup.sh.hugepages -- setup/hugepages.sh@45 
-- # export CLEAR_HUGE=yes 00:05:12.338 18:56:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:12.338 00:05:12.338 real 0m12.841s 00:05:12.338 user 0m4.972s 00:05:12.338 sys 0m6.748s 00:05:12.338 18:56:04 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.338 18:56:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:12.338 ************************************ 00:05:12.338 END TEST hugepages 00:05:12.338 ************************************ 00:05:12.338 18:56:04 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:12.338 18:56:04 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.338 18:56:04 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.338 18:56:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:12.597 ************************************ 00:05:12.597 START TEST driver 00:05:12.597 ************************************ 00:05:12.597 18:56:04 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:12.597 * Looking for test storage... 
00:05:12.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:12.597 18:56:04 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:12.597 18:56:04 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.597 18:56:04 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:15.130 18:56:07 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:15.130 18:56:07 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.130 18:56:07 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.130 18:56:07 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:15.130 ************************************ 00:05:15.130 START TEST guess_driver 00:05:15.130 ************************************ 00:05:15.130 18:56:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:15.130 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:15.130 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:15.130 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:15.130 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:15.130 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:15.130 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:15.130 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:15.130 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:15.130 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:15.130 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 189 > 0 )) 00:05:15.130 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:15.131 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:15.131 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:15.131 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:15.131 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:15.131 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:15.131 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:15.131 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:15.131 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:15.131 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:15.131 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:15.131 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:15.131 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:15.131 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:15.131 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:15.131 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:15.131 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:15.131 Looking for driver=vfio-pci 00:05:15.131 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.131 18:56:07 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:05:15.131 18:56:07 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.131 18:56:07 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.510 18:56:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.444 18:56:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.444 18:56:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.444 18:56:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.702 18:56:10 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:17.702 18:56:10 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:17.702 18:56:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.702 18:56:10 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:20.991 00:05:20.991 real 0m5.313s 00:05:20.991 user 0m1.282s 00:05:20.991 sys 0m2.073s 00:05:20.991 18:56:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.991 18:56:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:20.991 ************************************ 00:05:20.991 END TEST guess_driver 00:05:20.991 ************************************ 00:05:20.991 00:05:20.991 real 0m7.977s 00:05:20.991 user 0m1.890s 00:05:20.991 sys 0m3.163s 00:05:20.991 18:56:12 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.991 18:56:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:20.991 ************************************ 00:05:20.991 END TEST driver 00:05:20.991 ************************************ 00:05:20.991 18:56:12 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:20.991 18:56:12 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.991 18:56:12 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.991 18:56:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:20.991 ************************************ 00:05:20.991 START TEST devices 00:05:20.991 ************************************ 00:05:20.991 18:56:12 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:20.991 * Looking for test storage... 
00:05:20.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:20.991 18:56:12 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:20.991 18:56:12 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:20.991 18:56:12 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:20.991 18:56:12 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:22.367 18:56:14 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:22.367 18:56:14 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:22.367 18:56:14 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:22.367 18:56:14 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:22.367 18:56:14 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:22.367 18:56:14 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:22.367 18:56:14 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:22.367 18:56:14 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:22.367 18:56:14 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:22.367 18:56:14 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:22.367 No valid GPT data, bailing 00:05:22.367 18:56:14 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:22.367 18:56:14 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:22.367 18:56:14 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:22.367 18:56:14 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:22.367 18:56:14 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:22.367 18:56:14 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:22.367 18:56:14 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:22.368 18:56:14 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.368 18:56:14 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.368 18:56:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:22.368 ************************************ 00:05:22.368 START TEST nvme_mount 00:05:22.368 ************************************ 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:22.368 18:56:14 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:22.368 18:56:14 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:23.305 Creating new GPT entries in memory. 00:05:23.305 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:23.305 other utilities. 00:05:23.305 18:56:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:23.305 18:56:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:23.305 18:56:15 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:23.305 18:56:15 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:23.305 18:56:15 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:24.242 Creating new GPT entries in memory. 00:05:24.242 The operation has completed successfully. 
00:05:24.242 18:56:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:24.242 18:56:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.242 18:56:16 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 760999 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.500 18:56:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 
00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:25.873 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:25.873 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:26.131 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:26.131 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:26.131 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:26.131 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:26.131 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:26.131 18:56:18 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:26.131 18:56:18 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.131 18:56:18 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:26.131 18:56:18 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:26.389 18:56:18 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount 
/dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.389 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:26.389 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:26.389 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:26.389 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.389 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:26.389 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:26.389 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:26.389 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:26.389 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:26.389 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:26.389 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.389 18:56:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:26.389 18:56:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.389 18:56:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ 
\d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.323 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.586 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.586 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.586 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.586 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.586 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.586 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.586 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:27.586 18:56:19 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.586 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:27.586 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:27.586 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.586 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:27.586 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:27.586 18:56:19 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.586 18:56:20 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:05:27.586 18:56:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:27.586 18:56:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:27.586 18:56:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:27.586 18:56:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:27.586 18:56:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:27.586 18:56:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:27.586 18:56:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:27.586 18:56:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.586 18:56:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:27.586 18:56:20 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:27.586 18:56:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.586 18:56:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:28.960 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:28.961 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:28.961 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:28.961 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:28.961 18:56:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:28.961 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:28.961 00:05:28.961 real 0m6.713s 00:05:28.961 user 0m1.609s 00:05:28.961 sys 0m2.680s 00:05:28.961 18:56:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.961 18:56:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:28.961 ************************************ 00:05:28.961 END TEST nvme_mount 00:05:28.961 ************************************ 00:05:28.961 18:56:21 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:28.961 18:56:21 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:05:28.961 18:56:21 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.961 18:56:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:29.220 ************************************ 00:05:29.220 START TEST dm_mount 00:05:29.220 ************************************ 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:29.220 18:56:21 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:30.157 Creating new GPT entries in memory. 00:05:30.157 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:30.157 other utilities. 00:05:30.157 18:56:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:30.157 18:56:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.157 18:56:22 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:30.157 18:56:22 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:30.157 18:56:22 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:31.154 Creating new GPT entries in memory. 00:05:31.154 The operation has completed successfully. 00:05:31.154 18:56:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:31.154 18:56:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.154 18:56:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:31.154 18:56:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:31.154 18:56:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:32.088 The operation has completed successfully. 
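The two `sgdisk --new` invocations above come from the sector arithmetic traced in `setup/common.sh` (lines @51 and @57-@60): the 1 GiB partition size is converted to 512-byte sectors, the first partition starts at sector 2048, and each subsequent partition starts one sector past the previous end. A minimal standalone recap of that arithmetic (assuming 512-byte logical sectors, as the `(( size /= 512 ))` step in the trace implies):

```shell
#!/usr/bin/env bash
# Recap of the partition-bounds arithmetic from setup/common.sh as traced above.
# This only computes the sgdisk arguments; it does not touch any block device.
size=1073741824            # 1 GiB per partition (setup/common.sh@41)
(( size /= 512 ))          # bytes -> 512-byte sectors (setup/common.sh@51): 2097152
part_start=0 part_end=0
args=()
for part in 1 2; do
  # First partition starts at sector 2048; later ones start right after the previous end.
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  args+=("--new=${part}:${part_start}:${part_end}")
done
printf '%s\n' "${args[@]}"
```

Running this reproduces exactly the `--new=1:2048:2099199` and `--new=2:2099200:4196351` arguments seen in the flocked `sgdisk` calls in the log.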
00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 763691 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:32.088 18:56:24 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:32.345 18:56:24 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.345 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:32.345 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:32.345 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:32.345 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.345 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:32.345 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:32.345 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:32.345 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:32.345 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:32.345 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:32.345 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:32.345 18:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:32.345 18:56:24 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.345 18:56:24 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.720 18:56:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.720 18:56:26 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.094 18:56:27 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.094 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 
== \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:35.095 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:35.095 00:05:35.095 real 0m6.120s 00:05:35.095 user 0m1.135s 00:05:35.095 sys 0m1.827s 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.095 18:56:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:35.095 ************************************ 00:05:35.095 END TEST dm_mount 00:05:35.095 ************************************ 00:05:35.353 18:56:27 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:35.353 18:56:27 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:35.353 18:56:27 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.353 18:56:27 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.353 18:56:27 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:35.353 18:56:27 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.353 18:56:27 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:35.611 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:35.611 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:35.611 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:35.611 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:35.611 18:56:27 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:35.611 18:56:27 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:35.611 18:56:27 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:05:35.611 18:56:27 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.611 18:56:27 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:35.611 18:56:27 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.611 18:56:27 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:35.611 00:05:35.611 real 0m15.030s 00:05:35.611 user 0m3.484s 00:05:35.611 sys 0m5.731s 00:05:35.611 18:56:27 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.611 18:56:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:35.611 ************************************ 00:05:35.611 END TEST devices 00:05:35.611 ************************************ 00:05:35.611 00:05:35.611 real 0m47.903s 00:05:35.611 user 0m14.232s 00:05:35.611 sys 0m21.885s 00:05:35.611 18:56:27 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.611 18:56:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:35.611 ************************************ 00:05:35.611 END TEST setup.sh 00:05:35.611 ************************************ 00:05:35.611 18:56:27 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:36.984 Hugepages 00:05:36.984 node hugesize free / total 00:05:36.984 node0 1048576kB 0 / 0 00:05:36.984 node0 2048kB 2048 / 2048 00:05:36.984 node1 1048576kB 0 / 0 00:05:36.984 node1 2048kB 0 / 0 00:05:36.984 00:05:36.984 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:36.984 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:36.984 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:36.984 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:36.984 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:36.984 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:36.984 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:36.984 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:36.984 I/OAT 
0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:36.984 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:36.984 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:36.984 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:36.984 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:36.984 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:36.984 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:36.984 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:36.984 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:36.984 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:36.984 18:56:29 -- spdk/autotest.sh@130 -- # uname -s 00:05:36.984 18:56:29 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:36.984 18:56:29 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:36.984 18:56:29 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:38.357 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:38.357 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:38.357 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:38.357 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:38.357 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:38.357 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:38.357 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:38.357 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:38.357 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:38.357 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:38.357 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:38.357 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:38.357 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:38.357 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:38.357 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:38.357 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:39.293 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:39.553 18:56:31 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:40.491 18:56:32 -- 
common/autotest_common.sh@1533 -- # bdfs=() 00:05:40.491 18:56:32 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:40.491 18:56:32 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:40.491 18:56:32 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:40.491 18:56:32 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:40.491 18:56:32 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:40.491 18:56:32 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:40.491 18:56:32 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:40.491 18:56:32 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:40.491 18:56:32 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:40.491 18:56:32 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:05:40.491 18:56:32 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:41.866 Waiting for block devices as requested 00:05:41.866 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:41.866 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:41.866 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:41.866 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:42.126 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:42.126 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:42.126 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:42.126 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:42.385 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:05:42.385 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:42.643 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:42.643 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:42.643 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:42.643 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:42.902 0000:80:04.2 (8086 0e22): vfio-pci -> 
ioatdma 00:05:42.902 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:42.902 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:43.162 18:56:35 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:43.162 18:56:35 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:05:43.162 18:56:35 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:43.162 18:56:35 -- common/autotest_common.sh@1502 -- # grep 0000:0b:00.0/nvme/nvme 00:05:43.162 18:56:35 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:05:43.162 18:56:35 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:05:43.162 18:56:35 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:05:43.162 18:56:35 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:43.162 18:56:35 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:43.162 18:56:35 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:43.162 18:56:35 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:43.162 18:56:35 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:43.162 18:56:35 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:43.162 18:56:35 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:43.162 18:56:35 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:43.162 18:56:35 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:43.162 18:56:35 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:43.162 18:56:35 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:43.162 18:56:35 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:43.162 18:56:35 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:43.162 18:56:35 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:43.162 18:56:35 -- 
common/autotest_common.sh@1557 -- # continue 00:05:43.162 18:56:35 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:43.162 18:56:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.162 18:56:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.162 18:56:35 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:43.162 18:56:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.162 18:56:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.162 18:56:35 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:44.538 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:44.538 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:44.538 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:44.538 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:44.538 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:44.538 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:44.538 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:44.538 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:44.538 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:44.538 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:44.797 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:44.797 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:44.797 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:44.797 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:44.797 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:44.797 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:45.735 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:45.735 18:56:38 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:45.735 18:56:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:45.735 18:56:38 -- common/autotest_common.sh@10 -- # set +x 00:05:45.735 18:56:38 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:45.735 18:56:38 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:45.735 18:56:38 -- 
common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:45.735 18:56:38 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:45.735 18:56:38 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:45.735 18:56:38 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:45.735 18:56:38 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:45.735 18:56:38 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:45.735 18:56:38 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:45.735 18:56:38 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:45.735 18:56:38 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:45.735 18:56:38 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:45.735 18:56:38 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:05:45.735 18:56:38 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:45.735 18:56:38 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:05:45.735 18:56:38 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:45.735 18:56:38 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:45.735 18:56:38 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:45.735 18:56:38 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:0b:00.0 00:05:45.735 18:56:38 -- common/autotest_common.sh@1592 -- # [[ -z 0000:0b:00.0 ]] 00:05:45.735 18:56:38 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=769594 00:05:45.735 18:56:38 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.735 18:56:38 -- common/autotest_common.sh@1598 -- # waitforlisten 769594 00:05:45.735 18:56:38 -- common/autotest_common.sh@831 -- # '[' -z 769594 ']' 00:05:45.735 18:56:38 -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:45.735 18:56:38 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.735 18:56:38 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.735 18:56:38 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.735 18:56:38 -- common/autotest_common.sh@10 -- # set +x 00:05:45.994 [2024-07-25 18:56:38.246652] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:45.994 [2024-07-25 18:56:38.246731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769594 ] 00:05:45.994 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.994 [2024-07-25 18:56:38.312131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.994 [2024-07-25 18:56:38.422078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.252 18:56:38 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.252 18:56:38 -- common/autotest_common.sh@864 -- # return 0 00:05:46.252 18:56:38 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:46.252 18:56:38 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:46.252 18:56:38 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:05:49.537 nvme0n1 00:05:49.537 18:56:41 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:49.537 [2024-07-25 18:56:41.998541] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with 
error 18 00:05:49.537 [2024-07-25 18:56:41.998583] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:49.537 request: 00:05:49.537 { 00:05:49.537 "nvme_ctrlr_name": "nvme0", 00:05:49.537 "password": "test", 00:05:49.537 "method": "bdev_nvme_opal_revert", 00:05:49.537 "req_id": 1 00:05:49.537 } 00:05:49.537 Got JSON-RPC error response 00:05:49.537 response: 00:05:49.537 { 00:05:49.537 "code": -32603, 00:05:49.537 "message": "Internal error" 00:05:49.537 } 00:05:49.795 18:56:42 -- common/autotest_common.sh@1604 -- # true 00:05:49.795 18:56:42 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:49.795 18:56:42 -- common/autotest_common.sh@1608 -- # killprocess 769594 00:05:49.795 18:56:42 -- common/autotest_common.sh@950 -- # '[' -z 769594 ']' 00:05:49.795 18:56:42 -- common/autotest_common.sh@954 -- # kill -0 769594 00:05:49.795 18:56:42 -- common/autotest_common.sh@955 -- # uname 00:05:49.795 18:56:42 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.795 18:56:42 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 769594 00:05:49.795 18:56:42 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.795 18:56:42 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.795 18:56:42 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 769594' 00:05:49.795 killing process with pid 769594 00:05:49.795 18:56:42 -- common/autotest_common.sh@969 -- # kill 769594 00:05:49.795 18:56:42 -- common/autotest_common.sh@974 -- # wait 769594 00:05:51.713 18:56:43 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:51.713 18:56:43 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:51.713 18:56:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:51.713 18:56:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:51.713 18:56:43 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:51.713 18:56:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:51.713 18:56:43 -- 
common/autotest_common.sh@10 -- # set +x 00:05:51.713 18:56:43 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:51.713 18:56:43 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:51.713 18:56:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.713 18:56:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.713 18:56:43 -- common/autotest_common.sh@10 -- # set +x 00:05:51.713 ************************************ 00:05:51.713 START TEST env 00:05:51.713 ************************************ 00:05:51.713 18:56:43 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:51.713 * Looking for test storage... 00:05:51.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:51.713 18:56:43 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:51.713 18:56:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.713 18:56:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.713 18:56:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.713 ************************************ 00:05:51.713 START TEST env_memory 00:05:51.713 ************************************ 00:05:51.713 18:56:43 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:51.713 00:05:51.713 00:05:51.713 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.713 http://cunit.sourceforge.net/ 00:05:51.713 00:05:51.713 00:05:51.713 Suite: memory 00:05:51.713 Test: alloc and free memory map ...[2024-07-25 18:56:43.948848] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:51.713 passed 00:05:51.713 Test: mem map translation ...[2024-07-25 
18:56:43.969842] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:51.713 [2024-07-25 18:56:43.969863] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:51.713 [2024-07-25 18:56:43.969918] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:51.713 [2024-07-25 18:56:43.969931] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:51.713 passed 00:05:51.713 Test: mem map registration ...[2024-07-25 18:56:44.010667] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:51.713 [2024-07-25 18:56:44.010687] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:51.713 passed 00:05:51.714 Test: mem map adjacent registrations ...passed 00:05:51.714 00:05:51.714 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.714 suites 1 1 n/a 0 0 00:05:51.714 tests 4 4 4 0 0 00:05:51.714 asserts 152 152 152 0 n/a 00:05:51.714 00:05:51.714 Elapsed time = 0.142 seconds 00:05:51.714 00:05:51.714 real 0m0.150s 00:05:51.714 user 0m0.142s 00:05:51.714 sys 0m0.008s 00:05:51.714 18:56:44 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.714 18:56:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:51.714 ************************************ 00:05:51.714 END TEST env_memory 00:05:51.714 
************************************ 00:05:51.714 18:56:44 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:51.714 18:56:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.714 18:56:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.714 18:56:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.714 ************************************ 00:05:51.714 START TEST env_vtophys 00:05:51.714 ************************************ 00:05:51.714 18:56:44 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:51.714 EAL: lib.eal log level changed from notice to debug 00:05:51.714 EAL: Detected lcore 0 as core 0 on socket 0 00:05:51.714 EAL: Detected lcore 1 as core 1 on socket 0 00:05:51.714 EAL: Detected lcore 2 as core 2 on socket 0 00:05:51.714 EAL: Detected lcore 3 as core 3 on socket 0 00:05:51.714 EAL: Detected lcore 4 as core 4 on socket 0 00:05:51.714 EAL: Detected lcore 5 as core 5 on socket 0 00:05:51.714 EAL: Detected lcore 6 as core 8 on socket 0 00:05:51.714 EAL: Detected lcore 7 as core 9 on socket 0 00:05:51.714 EAL: Detected lcore 8 as core 10 on socket 0 00:05:51.714 EAL: Detected lcore 9 as core 11 on socket 0 00:05:51.714 EAL: Detected lcore 10 as core 12 on socket 0 00:05:51.714 EAL: Detected lcore 11 as core 13 on socket 0 00:05:51.714 EAL: Detected lcore 12 as core 0 on socket 1 00:05:51.714 EAL: Detected lcore 13 as core 1 on socket 1 00:05:51.714 EAL: Detected lcore 14 as core 2 on socket 1 00:05:51.714 EAL: Detected lcore 15 as core 3 on socket 1 00:05:51.714 EAL: Detected lcore 16 as core 4 on socket 1 00:05:51.714 EAL: Detected lcore 17 as core 5 on socket 1 00:05:51.714 EAL: Detected lcore 18 as core 8 on socket 1 00:05:51.714 EAL: Detected lcore 19 as core 9 on socket 1 00:05:51.714 EAL: Detected lcore 20 as core 10 on socket 1 00:05:51.714 EAL: 
Detected lcore 21 as core 11 on socket 1 00:05:51.714 EAL: Detected lcore 22 as core 12 on socket 1 00:05:51.714 EAL: Detected lcore 23 as core 13 on socket 1 00:05:51.714 EAL: Detected lcore 24 as core 0 on socket 0 00:05:51.714 EAL: Detected lcore 25 as core 1 on socket 0 00:05:51.714 EAL: Detected lcore 26 as core 2 on socket 0 00:05:51.714 EAL: Detected lcore 27 as core 3 on socket 0 00:05:51.714 EAL: Detected lcore 28 as core 4 on socket 0 00:05:51.714 EAL: Detected lcore 29 as core 5 on socket 0 00:05:51.714 EAL: Detected lcore 30 as core 8 on socket 0 00:05:51.714 EAL: Detected lcore 31 as core 9 on socket 0 00:05:51.714 EAL: Detected lcore 32 as core 10 on socket 0 00:05:51.714 EAL: Detected lcore 33 as core 11 on socket 0 00:05:51.714 EAL: Detected lcore 34 as core 12 on socket 0 00:05:51.714 EAL: Detected lcore 35 as core 13 on socket 0 00:05:51.714 EAL: Detected lcore 36 as core 0 on socket 1 00:05:51.714 EAL: Detected lcore 37 as core 1 on socket 1 00:05:51.714 EAL: Detected lcore 38 as core 2 on socket 1 00:05:51.714 EAL: Detected lcore 39 as core 3 on socket 1 00:05:51.714 EAL: Detected lcore 40 as core 4 on socket 1 00:05:51.714 EAL: Detected lcore 41 as core 5 on socket 1 00:05:51.714 EAL: Detected lcore 42 as core 8 on socket 1 00:05:51.714 EAL: Detected lcore 43 as core 9 on socket 1 00:05:51.714 EAL: Detected lcore 44 as core 10 on socket 1 00:05:51.714 EAL: Detected lcore 45 as core 11 on socket 1 00:05:51.714 EAL: Detected lcore 46 as core 12 on socket 1 00:05:51.714 EAL: Detected lcore 47 as core 13 on socket 1 00:05:51.714 EAL: Maximum logical cores by configuration: 128 00:05:51.714 EAL: Detected CPU lcores: 48 00:05:51.714 EAL: Detected NUMA nodes: 2 00:05:51.714 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:51.714 EAL: Detected shared linkage of DPDK 00:05:51.714 EAL: No shared files mode enabled, IPC will be disabled 00:05:51.714 EAL: Bus pci wants IOVA as 'DC' 00:05:51.714 EAL: Buses did not request a specific IOVA mode. 
00:05:51.714 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:51.714 EAL: Selected IOVA mode 'VA' 00:05:51.714 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.714 EAL: Probing VFIO support... 00:05:51.714 EAL: IOMMU type 1 (Type 1) is supported 00:05:51.714 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:51.714 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:51.714 EAL: VFIO support initialized 00:05:51.714 EAL: Ask a virtual area of 0x2e000 bytes 00:05:51.714 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:51.714 EAL: Setting up physically contiguous memory... 00:05:51.714 EAL: Setting maximum number of open files to 524288 00:05:51.714 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:51.714 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:51.714 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:51.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.714 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:51.714 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.714 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:51.714 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:51.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.714 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:51.714 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.714 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:51.714 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:51.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.714 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:51.714 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.714 
EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:51.714 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:51.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.714 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:51.714 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.714 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:51.714 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:51.714 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:51.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.714 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:51.714 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:51.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.714 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:51.714 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:51.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.714 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:51.714 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:51.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.714 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:51.714 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:51.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.714 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:51.714 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:51.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.714 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:51.714 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:51.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.714 EAL: Virtual area found at 0x201c00e00000 (size = 
0x61000) 00:05:51.714 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:51.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.714 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:51.714 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:51.714 EAL: Hugepages will be freed exactly as allocated. 00:05:51.714 EAL: No shared files mode enabled, IPC is disabled 00:05:51.714 EAL: No shared files mode enabled, IPC is disabled 00:05:51.714 EAL: TSC frequency is ~2700000 KHz 00:05:51.714 EAL: Main lcore 0 is ready (tid=7f930b675a00;cpuset=[0]) 00:05:51.714 EAL: Trying to obtain current memory policy. 00:05:51.714 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.714 EAL: Restoring previous memory policy: 0 00:05:51.714 EAL: request: mp_malloc_sync 00:05:51.714 EAL: No shared files mode enabled, IPC is disabled 00:05:51.714 EAL: Heap on socket 0 was expanded by 2MB 00:05:51.714 EAL: No shared files mode enabled, IPC is disabled 00:05:51.987 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:51.987 EAL: Mem event callback 'spdk:(nil)' registered 00:05:51.987 00:05:51.987 00:05:51.987 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.987 http://cunit.sourceforge.net/ 00:05:51.987 00:05:51.987 00:05:51.987 Suite: components_suite 00:05:51.987 Test: vtophys_malloc_test ...passed 00:05:51.987 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:51.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.988 EAL: Restoring previous memory policy: 4 00:05:51.988 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.988 EAL: request: mp_malloc_sync 00:05:51.988 EAL: No shared files mode enabled, IPC is disabled 00:05:51.988 EAL: Heap on socket 0 was expanded by 4MB 00:05:51.988 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.988 EAL: request: mp_malloc_sync 00:05:51.988 EAL: No shared files mode enabled, IPC is disabled 00:05:51.988 EAL: Heap on socket 0 was shrunk by 4MB 00:05:51.988 EAL: Trying to obtain current memory policy. 00:05:51.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.988 EAL: Restoring previous memory policy: 4 00:05:51.988 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.988 EAL: request: mp_malloc_sync 00:05:51.988 EAL: No shared files mode enabled, IPC is disabled 00:05:51.988 EAL: Heap on socket 0 was expanded by 6MB 00:05:51.988 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.988 EAL: request: mp_malloc_sync 00:05:51.988 EAL: No shared files mode enabled, IPC is disabled 00:05:51.988 EAL: Heap on socket 0 was shrunk by 6MB 00:05:51.988 EAL: Trying to obtain current memory policy. 00:05:51.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.988 EAL: Restoring previous memory policy: 4 00:05:51.988 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.988 EAL: request: mp_malloc_sync 00:05:51.988 EAL: No shared files mode enabled, IPC is disabled 00:05:51.988 EAL: Heap on socket 0 was expanded by 10MB 00:05:51.988 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.988 EAL: request: mp_malloc_sync 00:05:51.988 EAL: No shared files mode enabled, IPC is disabled 00:05:51.988 EAL: Heap on socket 0 was shrunk by 10MB 00:05:51.988 EAL: Trying to obtain current memory policy. 
00:05:51.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.988 EAL: Restoring previous memory policy: 4 00:05:51.988 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.988 EAL: request: mp_malloc_sync 00:05:51.988 EAL: No shared files mode enabled, IPC is disabled 00:05:51.988 EAL: Heap on socket 0 was expanded by 18MB 00:05:51.988 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.988 EAL: request: mp_malloc_sync 00:05:51.988 EAL: No shared files mode enabled, IPC is disabled 00:05:51.988 EAL: Heap on socket 0 was shrunk by 18MB 00:05:51.988 EAL: Trying to obtain current memory policy. 00:05:51.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.988 EAL: Restoring previous memory policy: 4 00:05:51.988 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.988 EAL: request: mp_malloc_sync 00:05:51.988 EAL: No shared files mode enabled, IPC is disabled 00:05:51.988 EAL: Heap on socket 0 was expanded by 34MB 00:05:51.988 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.988 EAL: request: mp_malloc_sync 00:05:51.988 EAL: No shared files mode enabled, IPC is disabled 00:05:51.988 EAL: Heap on socket 0 was shrunk by 34MB 00:05:51.988 EAL: Trying to obtain current memory policy. 00:05:51.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.988 EAL: Restoring previous memory policy: 4 00:05:51.988 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.988 EAL: request: mp_malloc_sync 00:05:51.988 EAL: No shared files mode enabled, IPC is disabled 00:05:51.988 EAL: Heap on socket 0 was expanded by 66MB 00:05:51.988 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.988 EAL: request: mp_malloc_sync 00:05:51.988 EAL: No shared files mode enabled, IPC is disabled 00:05:51.988 EAL: Heap on socket 0 was shrunk by 66MB 00:05:51.988 EAL: Trying to obtain current memory policy. 
00:05:51.988 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:51.988 EAL: Restoring previous memory policy: 4
00:05:51.988 EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.988 EAL: request: mp_malloc_sync
00:05:51.988 EAL: No shared files mode enabled, IPC is disabled
00:05:51.988 EAL: Heap on socket 0 was expanded by 130MB
00:05:51.988 EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.988 EAL: request: mp_malloc_sync
00:05:51.988 EAL: No shared files mode enabled, IPC is disabled
00:05:51.988 EAL: Heap on socket 0 was shrunk by 130MB
00:05:51.988 EAL: Trying to obtain current memory policy.
00:05:51.988 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:51.988 EAL: Restoring previous memory policy: 4
00:05:51.988 EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.988 EAL: request: mp_malloc_sync
00:05:51.988 EAL: No shared files mode enabled, IPC is disabled
00:05:51.988 EAL: Heap on socket 0 was expanded by 258MB
00:05:52.245 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.245 EAL: request: mp_malloc_sync
00:05:52.245 EAL: No shared files mode enabled, IPC is disabled
00:05:52.245 EAL: Heap on socket 0 was shrunk by 258MB
00:05:52.245 EAL: Trying to obtain current memory policy.
00:05:52.245 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:52.245 EAL: Restoring previous memory policy: 4
00:05:52.245 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.245 EAL: request: mp_malloc_sync
00:05:52.245 EAL: No shared files mode enabled, IPC is disabled
00:05:52.245 EAL: Heap on socket 0 was expanded by 514MB
00:05:52.503 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.503 EAL: request: mp_malloc_sync
00:05:52.503 EAL: No shared files mode enabled, IPC is disabled
00:05:52.503 EAL: Heap on socket 0 was shrunk by 514MB
00:05:52.503 EAL: Trying to obtain current memory policy.
00:05:52.503 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:52.762 EAL: Restoring previous memory policy: 4
00:05:52.762 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.762 EAL: request: mp_malloc_sync
00:05:52.762 EAL: No shared files mode enabled, IPC is disabled
00:05:52.762 EAL: Heap on socket 0 was expanded by 1026MB
00:05:53.020 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.278 EAL: request: mp_malloc_sync
00:05:53.278 EAL: No shared files mode enabled, IPC is disabled
00:05:53.278 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:53.278 passed
00:05:53.278
00:05:53.278 Run Summary: Type Total Ran Passed Failed Inactive
00:05:53.278 suites 1 1 n/a 0 0
00:05:53.278 tests 2 2 2 0 0
00:05:53.278 asserts 497 497 497 0 n/a
00:05:53.278
00:05:53.278 Elapsed time = 1.422 seconds
00:05:53.278 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.278 EAL: request: mp_malloc_sync
00:05:53.278 EAL: No shared files mode enabled, IPC is disabled
00:05:53.278 EAL: Heap on socket 0 was shrunk by 2MB
00:05:53.278 EAL: No shared files mode enabled, IPC is disabled
00:05:53.278 EAL: No shared files mode enabled, IPC is disabled
00:05:53.278 EAL: No shared files mode enabled, IPC is disabled
00:05:53.278
00:05:53.278 real 0m1.551s
00:05:53.278 user 0m0.887s
00:05:53.278 sys 0m0.630s
00:05:53.278 18:56:45 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:53.278 18:56:45 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:53.278 ************************************
00:05:53.278 END TEST env_vtophys
00:05:53.278 ************************************
00:05:53.278 18:56:45 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:53.278 18:56:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:53.278 18:56:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:53.278 18:56:45 env -- common/autotest_common.sh@10 -- # set +x
00:05:53.278 ************************************
00:05:53.278 START TEST env_pci
00:05:53.278 ************************************
00:05:53.278 18:56:45 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:53.278
00:05:53.278
00:05:53.278 CUnit - A unit testing framework for C - Version 2.1-3
00:05:53.278 http://cunit.sourceforge.net/
00:05:53.278
00:05:53.278
00:05:53.278 Suite: pci
00:05:53.278 Test: pci_hook ...[2024-07-25 18:56:45.721878] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 770486 has claimed it
00:05:53.537 EAL: Cannot find device (10000:00:01.0)
00:05:53.537 EAL: Failed to attach device on primary process
00:05:53.537 passed
00:05:53.537
00:05:53.537 Run Summary: Type Total Ran Passed Failed Inactive
00:05:53.537 suites 1 1 n/a 0 0
00:05:53.537 tests 1 1 1 0 0
00:05:53.537 asserts 25 25 25 0 n/a
00:05:53.537
00:05:53.537 Elapsed time = 0.027 seconds
00:05:53.537
00:05:53.537 real 0m0.041s
00:05:53.537 user 0m0.015s
00:05:53.537 sys 0m0.026s
00:05:53.537 18:56:45 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:53.537 18:56:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:53.537 ************************************
00:05:53.537 END TEST env_pci
00:05:53.537 ************************************
00:05:53.537 18:56:45 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:53.537 18:56:45 env -- env/env.sh@15 -- # uname
00:05:53.537 18:56:45 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:53.537 18:56:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:53.537 18:56:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:53.537 18:56:45 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:05:53.537 18:56:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:53.537 18:56:45 env -- common/autotest_common.sh@10 -- # set +x
00:05:53.537 ************************************
00:05:53.537 START TEST env_dpdk_post_init
00:05:53.537 ************************************
00:05:53.537 18:56:45 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:53.537 EAL: Detected CPU lcores: 48
00:05:53.537 EAL: Detected NUMA nodes: 2
00:05:53.537 EAL: Detected shared linkage of DPDK
00:05:53.537 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:53.537 EAL: Selected IOVA mode 'VA'
00:05:53.537 EAL: No free 2048 kB hugepages reported on node 1
00:05:53.537 EAL: VFIO support initialized
00:05:53.537 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:53.537 EAL: Using IOMMU type 1 (Type 1)
00:05:53.537 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:05:53.537 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:05:53.537 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:05:53.537 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:05:53.537 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:05:53.537 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:05:53.537 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:05:53.796 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:05:54.364 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0)
00:05:54.364 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:05:54.364 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:05:54.364 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:05:54.364 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:05:54.364 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:05:54.622 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:05:54.622 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:05:54.622 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:05:57.902 EAL: Releasing PCI mapped resource for 0000:0b:00.0
00:05:57.902 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000
00:05:57.902 Starting DPDK initialization...
00:05:57.902 Starting SPDK post initialization...
00:05:57.902 SPDK NVMe probe
00:05:57.902 Attaching to 0000:0b:00.0
00:05:57.902 Attached to 0000:0b:00.0
00:05:57.902 Cleaning up...
00:05:57.902
00:05:57.902 real 0m4.405s
00:05:57.902 user 0m3.254s
00:05:57.902 sys 0m0.205s
00:05:57.902 18:56:50 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:57.902 18:56:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:57.902 ************************************
00:05:57.902 END TEST env_dpdk_post_init
00:05:57.902 ************************************
00:05:57.902 18:56:50 env -- env/env.sh@26 -- # uname
00:05:57.902 18:56:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:57.902 18:56:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:57.902 18:56:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:57.902 18:56:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:57.902 18:56:50 env -- common/autotest_common.sh@10 -- # set +x
00:05:57.902 ************************************
00:05:57.902 START TEST env_mem_callbacks
************************************
00:05:57.902 18:56:50 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:57.902 EAL: Detected CPU lcores: 48
00:05:57.902 EAL: Detected NUMA nodes: 2
00:05:57.902 EAL: Detected shared linkage of DPDK
00:05:57.902 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:57.902 EAL: Selected IOVA mode 'VA'
00:05:57.902 EAL: No free 2048 kB hugepages reported on node 1
00:05:57.902 EAL: VFIO support initialized
00:05:57.902 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:57.902
00:05:57.902
00:05:57.902 CUnit - A unit testing framework for C - Version 2.1-3
00:05:57.902 http://cunit.sourceforge.net/
00:05:57.902
00:05:57.902
00:05:57.902 Suite: memory
00:05:57.902 Test: test ...
00:05:57.902 register 0x200000200000 2097152
00:05:57.902 malloc 3145728
00:05:57.902 register 0x200000400000 4194304
00:05:57.902 buf 0x200000500000 len 3145728 PASSED
00:05:57.902 malloc 64
00:05:57.902 buf 0x2000004fff40 len 64 PASSED
00:05:57.902 malloc 4194304
00:05:57.902 register 0x200000800000 6291456
00:05:57.902 buf 0x200000a00000 len 4194304 PASSED
00:05:57.902 free 0x200000500000 3145728
00:05:57.903 free 0x2000004fff40 64
00:05:57.903 unregister 0x200000400000 4194304 PASSED
00:05:57.903 free 0x200000a00000 4194304
00:05:57.903 unregister 0x200000800000 6291456 PASSED
00:05:57.903 malloc 8388608
00:05:57.903 register 0x200000400000 10485760
00:05:57.903 buf 0x200000600000 len 8388608 PASSED
00:05:57.903 free 0x200000600000 8388608
00:05:57.903 unregister 0x200000400000 10485760 PASSED
00:05:57.903 passed
00:05:57.903
00:05:57.903 Run Summary: Type Total Ran Passed Failed Inactive
00:05:57.903 suites 1 1 n/a 0 0
00:05:57.903 tests 1 1 1 0 0
00:05:57.903 asserts 15 15 15 0 n/a
00:05:57.903
00:05:57.903 Elapsed time = 0.005 seconds
00:05:57.903
00:05:57.903 real 0m0.053s
00:05:57.903 user 0m0.016s
00:05:57.903 sys 0m0.037s
00:05:57.903 18:56:50 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:57.903 18:56:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:57.903 ************************************
00:05:57.903 END TEST env_mem_callbacks
00:05:57.903 ************************************
00:05:57.903
00:05:57.903 real 0m6.489s
00:05:57.903 user 0m4.422s
00:05:57.903 sys 0m1.105s
00:05:57.903 18:56:50 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:57.903 18:56:50 env -- common/autotest_common.sh@10 -- # set +x
00:05:57.903 ************************************
00:05:57.903 END TEST env
00:05:57.903 ************************************
00:05:57.903 18:56:50 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:57.903 18:56:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:57.903 18:56:50 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:57.903 18:56:50 -- common/autotest_common.sh@10 -- # set +x
00:05:57.903 ************************************
00:05:57.903 START TEST rpc
00:05:57.903 ************************************
00:05:57.903 18:56:50 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:58.161 * Looking for test storage...
00:05:58.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:58.161 18:56:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=771153
00:05:58.161 18:56:50 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:58.161 18:56:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:58.162 18:56:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 771153
00:05:58.162 18:56:50 rpc -- common/autotest_common.sh@831 -- # '[' -z 771153 ']'
00:05:58.162 18:56:50 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:58.162 18:56:50 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:58.162 18:56:50 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:58.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:58.162 18:56:50 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:58.162 18:56:50 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:58.162 [2024-07-25 18:56:50.473760] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:05:58.162 [2024-07-25 18:56:50.473838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771153 ]
00:05:58.162 EAL: No free 2048 kB hugepages reported on node 1
00:05:58.162 [2024-07-25 18:56:50.540822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:58.420 [2024-07-25 18:56:50.648852] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:58.420 [2024-07-25 18:56:50.648905] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 771153' to capture a snapshot of events at runtime.
00:05:58.420 [2024-07-25 18:56:50.648931] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:58.420 [2024-07-25 18:56:50.648944] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:58.420 [2024-07-25 18:56:50.648956] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid771153 for offline analysis/debug.
00:05:58.420 [2024-07-25 18:56:50.648987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:58.679 18:56:50 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:58.679 18:56:50 rpc -- common/autotest_common.sh@864 -- # return 0
00:05:58.679 18:56:50 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:58.679 18:56:50 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:58.679 18:56:50 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:58.679 18:56:50 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:58.679 18:56:50 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:58.679 18:56:50 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:58.679 18:56:50 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:58.679 ************************************
00:05:58.679 START TEST rpc_integrity
00:05:58.679 ************************************
00:05:58.679 18:56:50 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:05:58.679 18:56:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:58.679 18:56:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:58.679 18:56:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:58.679 18:56:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:58.679 18:56:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:58.679 18:56:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:58.679 18:56:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:58.679 18:56:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:58.679 18:56:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:58.679 18:56:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:58.679 18:56:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:58.679 18:56:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:58.680 18:56:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:58.680 18:56:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:58.680 {
00:05:58.680 "name": "Malloc0",
00:05:58.680 "aliases": [
00:05:58.680 "bf40de8d-0c47-4d9e-a6e0-f8d3d08dace2"
00:05:58.680 ],
00:05:58.680 "product_name": "Malloc disk",
00:05:58.680 "block_size": 512,
00:05:58.680 "num_blocks": 16384,
00:05:58.680 "uuid": "bf40de8d-0c47-4d9e-a6e0-f8d3d08dace2",
00:05:58.680 "assigned_rate_limits": {
00:05:58.680 "rw_ios_per_sec": 0,
00:05:58.680 "rw_mbytes_per_sec": 0,
00:05:58.680 "r_mbytes_per_sec": 0,
00:05:58.680 "w_mbytes_per_sec": 0
00:05:58.680 },
00:05:58.680 "claimed": false,
00:05:58.680 "zoned": false,
00:05:58.680 "supported_io_types": {
00:05:58.680 "read": true,
00:05:58.680 "write": true,
00:05:58.680 "unmap": true,
00:05:58.680 "flush": true,
00:05:58.680 "reset": true,
00:05:58.680 "nvme_admin": false,
00:05:58.680 "nvme_io": false,
00:05:58.680 "nvme_io_md": false,
00:05:58.680 "write_zeroes": true,
00:05:58.680 "zcopy": true,
00:05:58.680 "get_zone_info": false,
00:05:58.680 "zone_management": false,
00:05:58.680 "zone_append": false,
00:05:58.680 "compare": false,
00:05:58.680 "compare_and_write": false,
00:05:58.680 "abort": true,
00:05:58.680 "seek_hole": false,
00:05:58.680 "seek_data": false,
00:05:58.680 "copy": true,
00:05:58.680 "nvme_iov_md": false
00:05:58.680 },
00:05:58.680 "memory_domains": [
00:05:58.680 {
00:05:58.680 "dma_device_id": "system",
00:05:58.680 "dma_device_type": 1
00:05:58.680 },
00:05:58.680 {
00:05:58.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:58.680 "dma_device_type": 2
00:05:58.680 }
00:05:58.680 ],
00:05:58.680 "driver_specific": {}
00:05:58.680 }
00:05:58.680 ]'
00:05:58.680 18:56:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:58.680 18:56:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:58.680 18:56:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:58.680 [2024-07-25 18:56:51.056387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:58.680 [2024-07-25 18:56:51.056432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:58.680 [2024-07-25 18:56:51.056458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c53d50
00:05:58.680 [2024-07-25 18:56:51.056474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:58.680 [2024-07-25 18:56:51.058007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:58.680 [2024-07-25 18:56:51.058036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:58.680 Passthru0
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:58.680 18:56:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:58.680 18:56:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:58.680 {
00:05:58.680 "name": "Malloc0",
00:05:58.680 "aliases": [
00:05:58.680 "bf40de8d-0c47-4d9e-a6e0-f8d3d08dace2"
00:05:58.680 ],
00:05:58.680 "product_name": "Malloc disk",
00:05:58.680 "block_size": 512,
00:05:58.680 "num_blocks": 16384,
00:05:58.680 "uuid": "bf40de8d-0c47-4d9e-a6e0-f8d3d08dace2",
00:05:58.680 "assigned_rate_limits": {
00:05:58.680 "rw_ios_per_sec": 0,
00:05:58.680 "rw_mbytes_per_sec": 0,
00:05:58.680 "r_mbytes_per_sec": 0,
00:05:58.680 "w_mbytes_per_sec": 0
00:05:58.680 },
00:05:58.680 "claimed": true,
00:05:58.680 "claim_type": "exclusive_write",
00:05:58.680 "zoned": false,
00:05:58.680 "supported_io_types": {
00:05:58.680 "read": true,
00:05:58.680 "write": true,
00:05:58.680 "unmap": true,
00:05:58.680 "flush": true,
00:05:58.680 "reset": true,
00:05:58.680 "nvme_admin": false,
00:05:58.680 "nvme_io": false,
00:05:58.680 "nvme_io_md": false,
00:05:58.680 "write_zeroes": true,
00:05:58.680 "zcopy": true,
00:05:58.680 "get_zone_info": false,
00:05:58.680 "zone_management": false,
00:05:58.680 "zone_append": false,
00:05:58.680 "compare": false,
00:05:58.680 "compare_and_write": false,
00:05:58.680 "abort": true,
00:05:58.680 "seek_hole": false,
00:05:58.680 "seek_data": false,
00:05:58.680 "copy": true,
00:05:58.680 "nvme_iov_md": false
00:05:58.680 },
00:05:58.680 "memory_domains": [
00:05:58.680 {
00:05:58.680 "dma_device_id": "system",
00:05:58.680 "dma_device_type": 1
00:05:58.680 },
00:05:58.680 {
00:05:58.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:58.680 "dma_device_type": 2
00:05:58.680 }
00:05:58.680 ],
00:05:58.680 "driver_specific": {}
00:05:58.680 },
00:05:58.680 {
00:05:58.680 "name": "Passthru0",
00:05:58.680 "aliases": [
00:05:58.680 "bb926052-68fc-5e21-b9d5-0f7a2fac90d4"
00:05:58.680 ],
00:05:58.680 "product_name": "passthru",
00:05:58.680 "block_size": 512,
00:05:58.680 "num_blocks": 16384,
00:05:58.680 "uuid": "bb926052-68fc-5e21-b9d5-0f7a2fac90d4",
00:05:58.680 "assigned_rate_limits": {
00:05:58.680 "rw_ios_per_sec": 0,
00:05:58.680 "rw_mbytes_per_sec": 0,
00:05:58.680 "r_mbytes_per_sec": 0,
00:05:58.680 "w_mbytes_per_sec": 0
00:05:58.680 },
00:05:58.680 "claimed": false,
00:05:58.680 "zoned": false,
00:05:58.680 "supported_io_types": {
00:05:58.680 "read": true,
00:05:58.680 "write": true,
00:05:58.680 "unmap": true,
00:05:58.680 "flush": true,
00:05:58.680 "reset": true,
00:05:58.680 "nvme_admin": false,
00:05:58.680 "nvme_io": false,
00:05:58.680 "nvme_io_md": false,
00:05:58.680 "write_zeroes": true,
00:05:58.680 "zcopy": true,
00:05:58.680 "get_zone_info": false,
00:05:58.680 "zone_management": false,
00:05:58.680 "zone_append": false,
00:05:58.680 "compare": false,
00:05:58.680 "compare_and_write": false,
00:05:58.680 "abort": true,
00:05:58.680 "seek_hole": false,
00:05:58.680 "seek_data": false,
00:05:58.680 "copy": true,
00:05:58.680 "nvme_iov_md": false
00:05:58.680 },
00:05:58.680 "memory_domains": [
00:05:58.680 {
00:05:58.680 "dma_device_id": "system",
00:05:58.680 "dma_device_type": 1
00:05:58.680 },
00:05:58.680 {
00:05:58.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:58.680 "dma_device_type": 2
00:05:58.680 }
00:05:58.680 ],
00:05:58.680 "driver_specific": {
00:05:58.680 "passthru": {
00:05:58.680 "name": "Passthru0",
00:05:58.680 "base_bdev_name": "Malloc0"
00:05:58.680 }
00:05:58.680 }
00:05:58.680 }
00:05:58.680 ]'
00:05:58.680 18:56:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:58.680 18:56:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:58.680 18:56:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:58.680 18:56:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:58.680 18:56:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:58.680 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:58.680 18:56:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:58.680 18:56:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:58.939 18:56:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:58.939
00:05:58.939 real 0m0.232s
00:05:58.939 user 0m0.159s
00:05:58.939 sys 0m0.017s
00:05:58.939 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:58.939 18:56:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:58.939 ************************************
00:05:58.939 END TEST rpc_integrity
************************************
00:05:58.939 18:56:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:58.939 18:56:51 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:58.939 18:56:51 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:58.939 18:56:51 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:58.939 ************************************
00:05:58.939 START TEST rpc_plugins
00:05:58.939 ************************************
00:05:58.939 18:56:51 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins
00:05:58.939 18:56:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:58.939 18:56:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:58.939 18:56:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:58.939 18:56:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:58.939 18:56:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:58.939 18:56:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:58.939 18:56:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:58.939 18:56:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:58.939 18:56:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:58.939 18:56:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:58.939 {
00:05:58.939 "name": "Malloc1",
00:05:58.939 "aliases": [
00:05:58.939 "dfd00335-76f5-4e7b-b4e2-19ddb59b5123"
00:05:58.939 ],
00:05:58.939 "product_name": "Malloc disk",
00:05:58.939 "block_size": 4096,
00:05:58.939 "num_blocks": 256,
00:05:58.939 "uuid": "dfd00335-76f5-4e7b-b4e2-19ddb59b5123",
00:05:58.939 "assigned_rate_limits": {
00:05:58.939 "rw_ios_per_sec": 0,
00:05:58.939 "rw_mbytes_per_sec": 0,
00:05:58.939 "r_mbytes_per_sec": 0,
00:05:58.939 "w_mbytes_per_sec": 0
00:05:58.939 },
00:05:58.939 "claimed": false,
00:05:58.939 "zoned": false,
00:05:58.939 "supported_io_types": {
00:05:58.939 "read": true,
00:05:58.939 "write": true,
00:05:58.939 "unmap": true,
00:05:58.939 "flush": true,
00:05:58.939 "reset": true,
00:05:58.939 "nvme_admin": false,
00:05:58.939 "nvme_io": false,
00:05:58.939 "nvme_io_md": false,
00:05:58.939 "write_zeroes": true,
00:05:58.939 "zcopy": true,
00:05:58.939 "get_zone_info": false,
00:05:58.939 "zone_management": false,
00:05:58.939 "zone_append": false,
00:05:58.939 "compare": false,
00:05:58.939 "compare_and_write": false,
00:05:58.939 "abort": true,
00:05:58.939 "seek_hole": false,
00:05:58.939 "seek_data": false,
00:05:58.939 "copy": true,
00:05:58.939 "nvme_iov_md": false
00:05:58.939 },
00:05:58.939 "memory_domains": [
00:05:58.939 {
00:05:58.939 "dma_device_id": "system",
00:05:58.939 "dma_device_type": 1
00:05:58.939 },
00:05:58.939 {
00:05:58.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:58.939 "dma_device_type": 2
00:05:58.939 }
00:05:58.939 ],
00:05:58.939 "driver_specific": {}
00:05:58.939 }
00:05:58.939 ]'
00:05:58.939 18:56:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:58.939 18:56:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:58.939 18:56:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:58.939 18:56:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:58.939 18:56:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:58.939 18:56:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:58.939 18:56:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:58.939 18:56:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:58.939 18:56:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:58.939 18:56:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:58.939 18:56:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:58.939 18:56:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:58.939 18:56:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:58.939
00:05:58.939 real 0m0.116s
00:05:58.939 user 0m0.076s
00:05:58.939 sys 0m0.010s
00:05:58.939 18:56:51 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:58.939 18:56:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:58.939 ************************************
00:05:58.939 END TEST rpc_plugins
00:05:58.939 ************************************
00:05:58.939 18:56:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:58.939 18:56:51 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:58.939 18:56:51 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:58.939 18:56:51 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:58.939 ************************************
00:05:58.939 START TEST rpc_trace_cmd_test
00:05:58.939 ************************************
00:05:58.939 18:56:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test
00:05:58.939 18:56:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:05:58.939 18:56:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:05:58.939 18:56:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:58.939 18:56:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:58.939 18:56:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:58.939 18:56:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:05:58.939 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid771153",
00:05:58.939 "tpoint_group_mask": "0x8",
00:05:58.939 "iscsi_conn": {
00:05:58.939 "mask": "0x2",
00:05:58.939 "tpoint_mask": "0x0"
00:05:58.939 },
00:05:58.939 "scsi": {
00:05:58.939 "mask": "0x4",
00:05:58.939 "tpoint_mask": "0x0"
00:05:58.939 },
00:05:58.939 "bdev": {
00:05:58.939 "mask": "0x8",
00:05:58.939 "tpoint_mask": "0xffffffffffffffff" 00:05:58.939 }, 00:05:58.939 "nvmf_rdma": { 00:05:58.939 "mask": "0x10", 00:05:58.939 "tpoint_mask": "0x0" 00:05:58.939 }, 00:05:58.939 "nvmf_tcp": { 00:05:58.939 "mask": "0x20", 00:05:58.939 "tpoint_mask": "0x0" 00:05:58.939 }, 00:05:58.939 "ftl": { 00:05:58.939 "mask": "0x40", 00:05:58.939 "tpoint_mask": "0x0" 00:05:58.939 }, 00:05:58.939 "blobfs": { 00:05:58.939 "mask": "0x80", 00:05:58.939 "tpoint_mask": "0x0" 00:05:58.939 }, 00:05:58.939 "dsa": { 00:05:58.939 "mask": "0x200", 00:05:58.939 "tpoint_mask": "0x0" 00:05:58.939 }, 00:05:58.939 "thread": { 00:05:58.939 "mask": "0x400", 00:05:58.939 "tpoint_mask": "0x0" 00:05:58.939 }, 00:05:58.939 "nvme_pcie": { 00:05:58.939 "mask": "0x800", 00:05:58.939 "tpoint_mask": "0x0" 00:05:58.939 }, 00:05:58.939 "iaa": { 00:05:58.939 "mask": "0x1000", 00:05:58.939 "tpoint_mask": "0x0" 00:05:58.939 }, 00:05:58.939 "nvme_tcp": { 00:05:58.939 "mask": "0x2000", 00:05:58.939 "tpoint_mask": "0x0" 00:05:58.939 }, 00:05:58.939 "bdev_nvme": { 00:05:58.939 "mask": "0x4000", 00:05:58.939 "tpoint_mask": "0x0" 00:05:58.939 }, 00:05:58.939 "sock": { 00:05:58.939 "mask": "0x8000", 00:05:58.939 "tpoint_mask": "0x0" 00:05:58.939 } 00:05:58.939 }' 00:05:58.940 18:56:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:59.198 18:56:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:59.198 18:56:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:59.198 18:56:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:59.198 18:56:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:59.198 18:56:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:59.198 18:56:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:59.198 18:56:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:59.198 18:56:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- 
# jq -r .bdev.tpoint_mask 00:05:59.198 18:56:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:59.198 00:05:59.198 real 0m0.200s 00:05:59.198 user 0m0.176s 00:05:59.198 sys 0m0.016s 00:05:59.198 18:56:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.198 18:56:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:59.198 ************************************ 00:05:59.198 END TEST rpc_trace_cmd_test 00:05:59.198 ************************************ 00:05:59.198 18:56:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:59.198 18:56:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:59.198 18:56:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:59.198 18:56:51 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.198 18:56:51 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.198 18:56:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.198 ************************************ 00:05:59.198 START TEST rpc_daemon_integrity 00:05:59.198 ************************************ 00:05:59.198 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:59.198 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:59.198 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.198 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.198 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.198 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:59.198 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:59.457 18:56:51 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:59.457 { 00:05:59.457 "name": "Malloc2", 00:05:59.457 "aliases": [ 00:05:59.457 "3a1aea4a-3f9b-4b05-a4ed-c093f55c07f7" 00:05:59.457 ], 00:05:59.457 "product_name": "Malloc disk", 00:05:59.457 "block_size": 512, 00:05:59.457 "num_blocks": 16384, 00:05:59.457 "uuid": "3a1aea4a-3f9b-4b05-a4ed-c093f55c07f7", 00:05:59.457 "assigned_rate_limits": { 00:05:59.457 "rw_ios_per_sec": 0, 00:05:59.457 "rw_mbytes_per_sec": 0, 00:05:59.457 "r_mbytes_per_sec": 0, 00:05:59.457 "w_mbytes_per_sec": 0 00:05:59.457 }, 00:05:59.457 "claimed": false, 00:05:59.457 "zoned": false, 00:05:59.457 "supported_io_types": { 00:05:59.457 "read": true, 00:05:59.457 "write": true, 00:05:59.457 "unmap": true, 00:05:59.457 "flush": true, 00:05:59.457 "reset": true, 00:05:59.457 "nvme_admin": false, 00:05:59.457 "nvme_io": false, 00:05:59.457 "nvme_io_md": false, 00:05:59.457 "write_zeroes": true, 00:05:59.457 "zcopy": true, 00:05:59.457 "get_zone_info": false, 00:05:59.457 "zone_management": false, 00:05:59.457 "zone_append": false, 00:05:59.457 "compare": false, 00:05:59.457 "compare_and_write": false, 00:05:59.457 "abort": true, 00:05:59.457 "seek_hole": false, 00:05:59.457 "seek_data": false, 
00:05:59.457 "copy": true, 00:05:59.457 "nvme_iov_md": false 00:05:59.457 }, 00:05:59.457 "memory_domains": [ 00:05:59.457 { 00:05:59.457 "dma_device_id": "system", 00:05:59.457 "dma_device_type": 1 00:05:59.457 }, 00:05:59.457 { 00:05:59.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.457 "dma_device_type": 2 00:05:59.457 } 00:05:59.457 ], 00:05:59.457 "driver_specific": {} 00:05:59.457 } 00:05:59.457 ]' 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.457 [2024-07-25 18:56:51.750625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:59.457 [2024-07-25 18:56:51.750678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:59.457 [2024-07-25 18:56:51.750709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c53980 00:05:59.457 [2024-07-25 18:56:51.750726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:59.457 [2024-07-25 18:56:51.752077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:59.457 [2024-07-25 18:56:51.752116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:59.457 Passthru0 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.457 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:59.457 { 00:05:59.457 "name": "Malloc2", 00:05:59.457 "aliases": [ 00:05:59.457 "3a1aea4a-3f9b-4b05-a4ed-c093f55c07f7" 00:05:59.457 ], 00:05:59.457 "product_name": "Malloc disk", 00:05:59.457 "block_size": 512, 00:05:59.457 "num_blocks": 16384, 00:05:59.457 "uuid": "3a1aea4a-3f9b-4b05-a4ed-c093f55c07f7", 00:05:59.457 "assigned_rate_limits": { 00:05:59.457 "rw_ios_per_sec": 0, 00:05:59.457 "rw_mbytes_per_sec": 0, 00:05:59.457 "r_mbytes_per_sec": 0, 00:05:59.457 "w_mbytes_per_sec": 0 00:05:59.457 }, 00:05:59.457 "claimed": true, 00:05:59.457 "claim_type": "exclusive_write", 00:05:59.457 "zoned": false, 00:05:59.457 "supported_io_types": { 00:05:59.457 "read": true, 00:05:59.457 "write": true, 00:05:59.457 "unmap": true, 00:05:59.457 "flush": true, 00:05:59.457 "reset": true, 00:05:59.457 "nvme_admin": false, 00:05:59.457 "nvme_io": false, 00:05:59.457 "nvme_io_md": false, 00:05:59.457 "write_zeroes": true, 00:05:59.457 "zcopy": true, 00:05:59.457 "get_zone_info": false, 00:05:59.457 "zone_management": false, 00:05:59.457 "zone_append": false, 00:05:59.457 "compare": false, 00:05:59.457 "compare_and_write": false, 00:05:59.457 "abort": true, 00:05:59.457 "seek_hole": false, 00:05:59.457 "seek_data": false, 00:05:59.457 "copy": true, 00:05:59.457 "nvme_iov_md": false 00:05:59.458 }, 00:05:59.458 "memory_domains": [ 00:05:59.458 { 00:05:59.458 "dma_device_id": "system", 00:05:59.458 "dma_device_type": 1 00:05:59.458 }, 00:05:59.458 { 00:05:59.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.458 "dma_device_type": 2 00:05:59.458 } 00:05:59.458 ], 00:05:59.458 "driver_specific": {} 00:05:59.458 }, 00:05:59.458 { 00:05:59.458 "name": "Passthru0", 00:05:59.458 "aliases": [ 00:05:59.458 "37baced1-5288-50ba-bbb0-f1c29f5ad2d3" 00:05:59.458 ], 
00:05:59.458 "product_name": "passthru", 00:05:59.458 "block_size": 512, 00:05:59.458 "num_blocks": 16384, 00:05:59.458 "uuid": "37baced1-5288-50ba-bbb0-f1c29f5ad2d3", 00:05:59.458 "assigned_rate_limits": { 00:05:59.458 "rw_ios_per_sec": 0, 00:05:59.458 "rw_mbytes_per_sec": 0, 00:05:59.458 "r_mbytes_per_sec": 0, 00:05:59.458 "w_mbytes_per_sec": 0 00:05:59.458 }, 00:05:59.458 "claimed": false, 00:05:59.458 "zoned": false, 00:05:59.458 "supported_io_types": { 00:05:59.458 "read": true, 00:05:59.458 "write": true, 00:05:59.458 "unmap": true, 00:05:59.458 "flush": true, 00:05:59.458 "reset": true, 00:05:59.458 "nvme_admin": false, 00:05:59.458 "nvme_io": false, 00:05:59.458 "nvme_io_md": false, 00:05:59.458 "write_zeroes": true, 00:05:59.458 "zcopy": true, 00:05:59.458 "get_zone_info": false, 00:05:59.458 "zone_management": false, 00:05:59.458 "zone_append": false, 00:05:59.458 "compare": false, 00:05:59.458 "compare_and_write": false, 00:05:59.458 "abort": true, 00:05:59.458 "seek_hole": false, 00:05:59.458 "seek_data": false, 00:05:59.458 "copy": true, 00:05:59.458 "nvme_iov_md": false 00:05:59.458 }, 00:05:59.458 "memory_domains": [ 00:05:59.458 { 00:05:59.458 "dma_device_id": "system", 00:05:59.458 "dma_device_type": 1 00:05:59.458 }, 00:05:59.458 { 00:05:59.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.458 "dma_device_type": 2 00:05:59.458 } 00:05:59.458 ], 00:05:59.458 "driver_specific": { 00:05:59.458 "passthru": { 00:05:59.458 "name": "Passthru0", 00:05:59.458 "base_bdev_name": "Malloc2" 00:05:59.458 } 00:05:59.458 } 00:05:59.458 } 00:05:59.458 ]' 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.458 18:56:51 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:59.458 00:05:59.458 real 0m0.243s 00:05:59.458 user 0m0.167s 00:05:59.458 sys 0m0.018s 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.458 18:56:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.458 ************************************ 00:05:59.458 END TEST rpc_daemon_integrity 00:05:59.458 ************************************ 00:05:59.458 18:56:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:59.458 18:56:51 rpc -- rpc/rpc.sh@84 -- # killprocess 771153 00:05:59.458 18:56:51 rpc -- common/autotest_common.sh@950 -- # '[' -z 771153 ']' 00:05:59.458 18:56:51 rpc -- common/autotest_common.sh@954 -- # kill -0 771153 00:05:59.458 18:56:51 rpc -- common/autotest_common.sh@955 -- # uname 00:05:59.458 
18:56:51 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.458 18:56:51 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 771153 00:05:59.458 18:56:51 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.458 18:56:51 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.458 18:56:51 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 771153' 00:05:59.458 killing process with pid 771153 00:05:59.458 18:56:51 rpc -- common/autotest_common.sh@969 -- # kill 771153 00:05:59.458 18:56:51 rpc -- common/autotest_common.sh@974 -- # wait 771153 00:06:00.025 00:06:00.025 real 0m2.023s 00:06:00.025 user 0m2.545s 00:06:00.025 sys 0m0.587s 00:06:00.025 18:56:52 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.025 18:56:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.025 ************************************ 00:06:00.025 END TEST rpc 00:06:00.025 ************************************ 00:06:00.025 18:56:52 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:00.025 18:56:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.025 18:56:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.025 18:56:52 -- common/autotest_common.sh@10 -- # set +x 00:06:00.025 ************************************ 00:06:00.025 START TEST skip_rpc 00:06:00.025 ************************************ 00:06:00.025 18:56:52 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:00.025 * Looking for test storage... 
00:06:00.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:00.284 18:56:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:00.284 18:56:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:00.284 18:56:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:00.284 18:56:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.284 18:56:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.284 18:56:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.284 ************************************ 00:06:00.284 START TEST skip_rpc 00:06:00.284 ************************************ 00:06:00.284 18:56:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:00.284 18:56:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=771575 00:06:00.284 18:56:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:00.284 18:56:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.284 18:56:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:00.284 [2024-07-25 18:56:52.579790] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:00.284 [2024-07-25 18:56:52.579852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771575 ] 00:06:00.284 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.284 [2024-07-25 18:56:52.650613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.542 [2024-07-25 18:56:52.775539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 771575 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 771575 ']' 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 771575 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 771575 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.802 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.803 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 771575' 00:06:05.803 killing process with pid 771575 00:06:05.803 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 771575 00:06:05.803 18:56:57 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 771575 00:06:05.803 00:06:05.803 real 0m5.509s 00:06:05.803 user 0m5.168s 00:06:05.803 sys 0m0.346s 00:06:05.803 18:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.803 18:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.803 ************************************ 00:06:05.803 END TEST skip_rpc 00:06:05.803 ************************************ 00:06:05.803 18:56:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:05.803 18:56:58 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.803 18:56:58 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.803 18:56:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.803 
************************************ 00:06:05.803 START TEST skip_rpc_with_json 00:06:05.803 ************************************ 00:06:05.803 18:56:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:05.803 18:56:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:05.803 18:56:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=772268 00:06:05.803 18:56:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.803 18:56:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.803 18:56:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 772268 00:06:05.803 18:56:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 772268 ']' 00:06:05.803 18:56:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.803 18:56:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.803 18:56:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.803 18:56:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.803 18:56:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:05.803 [2024-07-25 18:56:58.140092] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:05.803 [2024-07-25 18:56:58.140213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772268 ] 00:06:05.803 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.803 [2024-07-25 18:56:58.213360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.061 [2024-07-25 18:56:58.333563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.627 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.627 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:06.627 18:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:06.627 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.627 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.627 [2024-07-25 18:56:59.094398] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:06.885 request: 00:06:06.885 { 00:06:06.885 "trtype": "tcp", 00:06:06.885 "method": "nvmf_get_transports", 00:06:06.885 "req_id": 1 00:06:06.885 } 00:06:06.885 Got JSON-RPC error response 00:06:06.885 response: 00:06:06.885 { 00:06:06.885 "code": -19, 00:06:06.885 "message": "No such device" 00:06:06.885 } 00:06:06.885 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:06.885 18:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:06.885 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.885 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.885 [2024-07-25 18:56:59.102512] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:06.885 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.885 18:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:06.885 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.885 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.885 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.885 18:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:06.885 { 00:06:06.885 "subsystems": [ 00:06:06.885 { 00:06:06.885 "subsystem": "vfio_user_target", 00:06:06.885 "config": null 00:06:06.885 }, 00:06:06.885 { 00:06:06.885 "subsystem": "keyring", 00:06:06.885 "config": [] 00:06:06.885 }, 00:06:06.885 { 00:06:06.885 "subsystem": "iobuf", 00:06:06.885 "config": [ 00:06:06.885 { 00:06:06.885 "method": "iobuf_set_options", 00:06:06.885 "params": { 00:06:06.885 "small_pool_count": 8192, 00:06:06.885 "large_pool_count": 1024, 00:06:06.885 "small_bufsize": 8192, 00:06:06.885 "large_bufsize": 135168 00:06:06.885 } 00:06:06.885 } 00:06:06.885 ] 00:06:06.885 }, 00:06:06.885 { 00:06:06.885 "subsystem": "sock", 00:06:06.885 "config": [ 00:06:06.885 { 00:06:06.885 "method": "sock_set_default_impl", 00:06:06.885 "params": { 00:06:06.885 "impl_name": "posix" 00:06:06.885 } 00:06:06.885 }, 00:06:06.885 { 00:06:06.885 "method": "sock_impl_set_options", 00:06:06.885 "params": { 00:06:06.885 "impl_name": "ssl", 00:06:06.885 "recv_buf_size": 4096, 00:06:06.885 "send_buf_size": 4096, 00:06:06.885 "enable_recv_pipe": true, 00:06:06.885 "enable_quickack": false, 00:06:06.885 "enable_placement_id": 0, 00:06:06.885 "enable_zerocopy_send_server": true, 00:06:06.885 "enable_zerocopy_send_client": false, 00:06:06.885 "zerocopy_threshold": 0, 
00:06:06.885 "tls_version": 0, 00:06:06.885 "enable_ktls": false 00:06:06.885 } 00:06:06.885 }, 00:06:06.885 { 00:06:06.885 "method": "sock_impl_set_options", 00:06:06.885 "params": { 00:06:06.885 "impl_name": "posix", 00:06:06.885 "recv_buf_size": 2097152, 00:06:06.885 "send_buf_size": 2097152, 00:06:06.885 "enable_recv_pipe": true, 00:06:06.885 "enable_quickack": false, 00:06:06.885 "enable_placement_id": 0, 00:06:06.885 "enable_zerocopy_send_server": true, 00:06:06.885 "enable_zerocopy_send_client": false, 00:06:06.885 "zerocopy_threshold": 0, 00:06:06.885 "tls_version": 0, 00:06:06.885 "enable_ktls": false 00:06:06.885 } 00:06:06.885 } 00:06:06.885 ] 00:06:06.885 }, 00:06:06.885 { 00:06:06.885 "subsystem": "vmd", 00:06:06.885 "config": [] 00:06:06.885 }, 00:06:06.885 { 00:06:06.885 "subsystem": "accel", 00:06:06.885 "config": [ 00:06:06.885 { 00:06:06.885 "method": "accel_set_options", 00:06:06.885 "params": { 00:06:06.885 "small_cache_size": 128, 00:06:06.885 "large_cache_size": 16, 00:06:06.885 "task_count": 2048, 00:06:06.885 "sequence_count": 2048, 00:06:06.885 "buf_count": 2048 00:06:06.885 } 00:06:06.885 } 00:06:06.885 ] 00:06:06.885 }, 00:06:06.885 { 00:06:06.885 "subsystem": "bdev", 00:06:06.885 "config": [ 00:06:06.885 { 00:06:06.885 "method": "bdev_set_options", 00:06:06.885 "params": { 00:06:06.885 "bdev_io_pool_size": 65535, 00:06:06.885 "bdev_io_cache_size": 256, 00:06:06.885 "bdev_auto_examine": true, 00:06:06.885 "iobuf_small_cache_size": 128, 00:06:06.885 "iobuf_large_cache_size": 16 00:06:06.885 } 00:06:06.885 }, 00:06:06.885 { 00:06:06.885 "method": "bdev_raid_set_options", 00:06:06.885 "params": { 00:06:06.885 "process_window_size_kb": 1024, 00:06:06.885 "process_max_bandwidth_mb_sec": 0 00:06:06.885 } 00:06:06.885 }, 00:06:06.885 { 00:06:06.885 "method": "bdev_iscsi_set_options", 00:06:06.885 "params": { 00:06:06.885 "timeout_sec": 30 00:06:06.885 } 00:06:06.885 }, 00:06:06.885 { 00:06:06.885 "method": "bdev_nvme_set_options", 00:06:06.886 
"params": { 00:06:06.886 "action_on_timeout": "none", 00:06:06.886 "timeout_us": 0, 00:06:06.886 "timeout_admin_us": 0, 00:06:06.886 "keep_alive_timeout_ms": 10000, 00:06:06.886 "arbitration_burst": 0, 00:06:06.886 "low_priority_weight": 0, 00:06:06.886 "medium_priority_weight": 0, 00:06:06.886 "high_priority_weight": 0, 00:06:06.886 "nvme_adminq_poll_period_us": 10000, 00:06:06.886 "nvme_ioq_poll_period_us": 0, 00:06:06.886 "io_queue_requests": 0, 00:06:06.886 "delay_cmd_submit": true, 00:06:06.886 "transport_retry_count": 4, 00:06:06.886 "bdev_retry_count": 3, 00:06:06.886 "transport_ack_timeout": 0, 00:06:06.886 "ctrlr_loss_timeout_sec": 0, 00:06:06.886 "reconnect_delay_sec": 0, 00:06:06.886 "fast_io_fail_timeout_sec": 0, 00:06:06.886 "disable_auto_failback": false, 00:06:06.886 "generate_uuids": false, 00:06:06.886 "transport_tos": 0, 00:06:06.886 "nvme_error_stat": false, 00:06:06.886 "rdma_srq_size": 0, 00:06:06.886 "io_path_stat": false, 00:06:06.886 "allow_accel_sequence": false, 00:06:06.886 "rdma_max_cq_size": 0, 00:06:06.886 "rdma_cm_event_timeout_ms": 0, 00:06:06.886 "dhchap_digests": [ 00:06:06.886 "sha256", 00:06:06.886 "sha384", 00:06:06.886 "sha512" 00:06:06.886 ], 00:06:06.886 "dhchap_dhgroups": [ 00:06:06.886 "null", 00:06:06.886 "ffdhe2048", 00:06:06.886 "ffdhe3072", 00:06:06.886 "ffdhe4096", 00:06:06.886 "ffdhe6144", 00:06:06.886 "ffdhe8192" 00:06:06.886 ] 00:06:06.886 } 00:06:06.886 }, 00:06:06.886 { 00:06:06.886 "method": "bdev_nvme_set_hotplug", 00:06:06.886 "params": { 00:06:06.886 "period_us": 100000, 00:06:06.886 "enable": false 00:06:06.886 } 00:06:06.886 }, 00:06:06.886 { 00:06:06.886 "method": "bdev_wait_for_examine" 00:06:06.886 } 00:06:06.886 ] 00:06:06.886 }, 00:06:06.886 { 00:06:06.886 "subsystem": "scsi", 00:06:06.886 "config": null 00:06:06.886 }, 00:06:06.886 { 00:06:06.886 "subsystem": "scheduler", 00:06:06.886 "config": [ 00:06:06.886 { 00:06:06.886 "method": "framework_set_scheduler", 00:06:06.886 "params": { 00:06:06.886 
"name": "static" 00:06:06.886 } 00:06:06.886 } 00:06:06.886 ] 00:06:06.886 }, 00:06:06.886 { 00:06:06.886 "subsystem": "vhost_scsi", 00:06:06.886 "config": [] 00:06:06.886 }, 00:06:06.886 { 00:06:06.886 "subsystem": "vhost_blk", 00:06:06.886 "config": [] 00:06:06.886 }, 00:06:06.886 { 00:06:06.886 "subsystem": "ublk", 00:06:06.886 "config": [] 00:06:06.886 }, 00:06:06.886 { 00:06:06.886 "subsystem": "nbd", 00:06:06.886 "config": [] 00:06:06.886 }, 00:06:06.886 { 00:06:06.886 "subsystem": "nvmf", 00:06:06.886 "config": [ 00:06:06.886 { 00:06:06.886 "method": "nvmf_set_config", 00:06:06.886 "params": { 00:06:06.886 "discovery_filter": "match_any", 00:06:06.886 "admin_cmd_passthru": { 00:06:06.886 "identify_ctrlr": false 00:06:06.886 } 00:06:06.886 } 00:06:06.886 }, 00:06:06.886 { 00:06:06.886 "method": "nvmf_set_max_subsystems", 00:06:06.886 "params": { 00:06:06.886 "max_subsystems": 1024 00:06:06.886 } 00:06:06.886 }, 00:06:06.886 { 00:06:06.886 "method": "nvmf_set_crdt", 00:06:06.886 "params": { 00:06:06.886 "crdt1": 0, 00:06:06.886 "crdt2": 0, 00:06:06.886 "crdt3": 0 00:06:06.886 } 00:06:06.886 }, 00:06:06.886 { 00:06:06.886 "method": "nvmf_create_transport", 00:06:06.886 "params": { 00:06:06.886 "trtype": "TCP", 00:06:06.886 "max_queue_depth": 128, 00:06:06.886 "max_io_qpairs_per_ctrlr": 127, 00:06:06.886 "in_capsule_data_size": 4096, 00:06:06.886 "max_io_size": 131072, 00:06:06.886 "io_unit_size": 131072, 00:06:06.886 "max_aq_depth": 128, 00:06:06.886 "num_shared_buffers": 511, 00:06:06.886 "buf_cache_size": 4294967295, 00:06:06.886 "dif_insert_or_strip": false, 00:06:06.886 "zcopy": false, 00:06:06.886 "c2h_success": true, 00:06:06.886 "sock_priority": 0, 00:06:06.886 "abort_timeout_sec": 1, 00:06:06.886 "ack_timeout": 0, 00:06:06.886 "data_wr_pool_size": 0 00:06:06.886 } 00:06:06.886 } 00:06:06.886 ] 00:06:06.886 }, 00:06:06.886 { 00:06:06.886 "subsystem": "iscsi", 00:06:06.886 "config": [ 00:06:06.886 { 00:06:06.886 "method": "iscsi_set_options", 00:06:06.886 
"params": { 00:06:06.886 "node_base": "iqn.2016-06.io.spdk", 00:06:06.886 "max_sessions": 128, 00:06:06.886 "max_connections_per_session": 2, 00:06:06.886 "max_queue_depth": 64, 00:06:06.886 "default_time2wait": 2, 00:06:06.886 "default_time2retain": 20, 00:06:06.886 "first_burst_length": 8192, 00:06:06.886 "immediate_data": true, 00:06:06.886 "allow_duplicated_isid": false, 00:06:06.886 "error_recovery_level": 0, 00:06:06.886 "nop_timeout": 60, 00:06:06.886 "nop_in_interval": 30, 00:06:06.886 "disable_chap": false, 00:06:06.886 "require_chap": false, 00:06:06.886 "mutual_chap": false, 00:06:06.886 "chap_group": 0, 00:06:06.886 "max_large_datain_per_connection": 64, 00:06:06.886 "max_r2t_per_connection": 4, 00:06:06.886 "pdu_pool_size": 36864, 00:06:06.886 "immediate_data_pool_size": 16384, 00:06:06.886 "data_out_pool_size": 2048 00:06:06.886 } 00:06:06.886 } 00:06:06.886 ] 00:06:06.886 } 00:06:06.886 ] 00:06:06.886 } 00:06:06.886 18:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:06.886 18:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 772268 00:06:06.886 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 772268 ']' 00:06:06.886 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 772268 00:06:06.886 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:06.886 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.886 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 772268 00:06:06.886 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.886 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.886 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 772268' 00:06:06.886 killing process with pid 772268 00:06:06.886 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 772268 00:06:06.886 18:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 772268 00:06:07.453 18:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=772537 00:06:07.453 18:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:07.453 18:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:12.721 18:57:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 772537 00:06:12.721 18:57:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 772537 ']' 00:06:12.721 18:57:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 772537 00:06:12.721 18:57:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:12.721 18:57:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.721 18:57:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 772537 00:06:12.721 18:57:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.721 18:57:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.721 18:57:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 772537' 00:06:12.721 killing process with pid 772537 00:06:12.721 18:57:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 772537 00:06:12.721 18:57:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 772537 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep 
-q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:12.979 00:06:12.979 real 0m7.159s 00:06:12.979 user 0m6.885s 00:06:12.979 sys 0m0.790s 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.979 ************************************ 00:06:12.979 END TEST skip_rpc_with_json 00:06:12.979 ************************************ 00:06:12.979 18:57:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:12.979 18:57:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.979 18:57:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.979 18:57:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.979 ************************************ 00:06:12.979 START TEST skip_rpc_with_delay 00:06:12.979 ************************************ 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.979 18:57:05 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:12.979 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:12.979 [2024-07-25 18:57:05.345769] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:12.980 [2024-07-25 18:57:05.345859] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:12.980 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:12.980 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.980 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.980 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.980 00:06:12.980 real 0m0.068s 00:06:12.980 user 0m0.038s 00:06:12.980 sys 0m0.030s 00:06:12.980 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.980 18:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:12.980 ************************************ 00:06:12.980 END TEST skip_rpc_with_delay 00:06:12.980 ************************************ 00:06:12.980 18:57:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:12.980 18:57:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:12.980 18:57:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:12.980 18:57:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.980 18:57:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.980 18:57:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.980 ************************************ 00:06:12.980 START TEST exit_on_failed_rpc_init 00:06:12.980 ************************************ 00:06:12.980 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:12.980 18:57:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=773365 00:06:12.980 18:57:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.980 18:57:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 773365 00:06:12.980 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 773365 ']' 00:06:12.980 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.980 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.980 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.980 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.980 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:13.238 [2024-07-25 18:57:05.461147] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:13.238 [2024-07-25 18:57:05.461226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773365 ] 00:06:13.238 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.238 [2024-07-25 18:57:05.527566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.238 [2024-07-25 18:57:05.638775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.496 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.496 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:13.496 18:57:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.496 18:57:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:13.496 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:13.496 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:13.496 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:13.496 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.496 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:13.497 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.497 18:57:05 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:13.497 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.497 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:13.497 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:13.497 18:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:13.497 [2024-07-25 18:57:05.966155] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:13.497 [2024-07-25 18:57:05.966236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773374 ] 00:06:13.755 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.755 [2024-07-25 18:57:06.037726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.755 [2024-07-25 18:57:06.158322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.755 [2024-07-25 18:57:06.158428] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:13.755 [2024-07-25 18:57:06.158451] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:13.755 [2024-07-25 18:57:06.158464] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 773365 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 773365 ']' 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 773365 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 773365 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 773365' 
00:06:14.013 killing process with pid 773365 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 773365 00:06:14.013 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 773365 00:06:14.579 00:06:14.579 real 0m1.379s 00:06:14.579 user 0m1.536s 00:06:14.579 sys 0m0.482s 00:06:14.579 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.579 18:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:14.579 ************************************ 00:06:14.579 END TEST exit_on_failed_rpc_init 00:06:14.579 ************************************ 00:06:14.579 18:57:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:14.579 00:06:14.579 real 0m14.363s 00:06:14.579 user 0m13.719s 00:06:14.579 sys 0m1.820s 00:06:14.579 18:57:06 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.579 18:57:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.579 ************************************ 00:06:14.579 END TEST skip_rpc 00:06:14.579 ************************************ 00:06:14.579 18:57:06 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:14.579 18:57:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.579 18:57:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.579 18:57:06 -- common/autotest_common.sh@10 -- # set +x 00:06:14.579 ************************************ 00:06:14.579 START TEST rpc_client 00:06:14.579 ************************************ 00:06:14.579 18:57:06 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:14.579 * Looking for test storage... 
00:06:14.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:14.579 18:57:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:14.579 OK 00:06:14.579 18:57:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:14.579 00:06:14.579 real 0m0.064s 00:06:14.579 user 0m0.026s 00:06:14.579 sys 0m0.043s 00:06:14.579 18:57:06 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.579 18:57:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:14.579 ************************************ 00:06:14.579 END TEST rpc_client 00:06:14.579 ************************************ 00:06:14.579 18:57:06 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:14.579 18:57:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.579 18:57:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.579 18:57:06 -- common/autotest_common.sh@10 -- # set +x 00:06:14.579 ************************************ 00:06:14.579 START TEST json_config 00:06:14.579 ************************************ 00:06:14.579 18:57:06 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:14.579 18:57:07 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.579 18:57:07 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:14.579 18:57:07 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.579 18:57:07 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.579 18:57:07 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.579 18:57:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:14.579 18:57:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.579 18:57:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.579 18:57:07 json_config -- paths/export.sh@5 -- # export PATH 00:06:14.579 18:57:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@47 -- # : 0 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:14.579 18:57:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.580 18:57:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.580 18:57:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.580 18:57:07 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:14.580 18:57:07 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:14.580 18:57:07 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:14.580 18:57:07 json_config -- 
json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:14.580 INFO: JSON configuration test init 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:14.580 18:57:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:14.580 18:57:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:14.580 18:57:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:14.580 18:57:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.580 18:57:07 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:14.580 18:57:07 json_config -- json_config/common.sh@9 -- # local app=target 00:06:14.580 18:57:07 json_config -- json_config/common.sh@10 -- # shift 00:06:14.580 18:57:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:14.580 18:57:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:14.580 18:57:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:14.580 18:57:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.580 18:57:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.580 18:57:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=773622 00:06:14.580 18:57:07 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:14.580 18:57:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:06:14.580 Waiting for target to run... 00:06:14.580 18:57:07 json_config -- json_config/common.sh@25 -- # waitforlisten 773622 /var/tmp/spdk_tgt.sock 00:06:14.580 18:57:07 json_config -- common/autotest_common.sh@831 -- # '[' -z 773622 ']' 00:06:14.580 18:57:07 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:14.580 18:57:07 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.580 18:57:07 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:14.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:14.580 18:57:07 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.580 18:57:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.837 [2024-07-25 18:57:07.082796] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:14.837 [2024-07-25 18:57:07.082875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773622 ] 00:06:14.837 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.095 [2024-07-25 18:57:07.436626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.095 [2024-07-25 18:57:07.526167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.688 18:57:08 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.688 18:57:08 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:15.688 18:57:08 json_config -- json_config/common.sh@26 -- # echo '' 00:06:15.688 00:06:15.688 18:57:08 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:15.688 18:57:08 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:15.688 18:57:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:15.688 18:57:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.688 18:57:08 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:15.688 18:57:08 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:15.688 18:57:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.688 18:57:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.688 18:57:08 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:15.688 18:57:08 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:15.688 18:57:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:19.012 
18:57:11 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:19.012 18:57:11 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:19.012 18:57:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:19.012 18:57:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.012 18:57:11 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:19.012 18:57:11 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:19.012 18:57:11 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:19.012 18:57:11 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:19.012 18:57:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:19.012 18:57:11 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@51 -- # sort 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:19.270 18:57:11 
json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:19.270 18:57:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:19.270 18:57:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:19.270 18:57:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:19.270 18:57:11 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:19.270 18:57:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:19.527 MallocForNvmf0 00:06:19.527 18:57:11 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:19.527 18:57:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:19.785 MallocForNvmf1 00:06:19.785 18:57:12 
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:19.785 18:57:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:20.043 [2024-07-25 18:57:12.278524] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.043 18:57:12 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:20.043 18:57:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:20.300 18:57:12 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:20.300 18:57:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:20.558 18:57:12 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:20.558 18:57:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:20.815 18:57:13 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:20.815 18:57:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:20.815 [2024-07-25 18:57:13.257736] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:20.815 18:57:13 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:20.815 18:57:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:20.815 18:57:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.084 18:57:13 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:21.084 18:57:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:21.084 18:57:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.084 18:57:13 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:21.084 18:57:13 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:21.084 18:57:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:21.344 MallocBdevForConfigChangeCheck 00:06:21.344 18:57:13 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:21.344 18:57:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:21.344 18:57:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.344 18:57:13 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:21.344 18:57:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.601 18:57:13 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:21.601 INFO: shutting down applications... 
00:06:21.601 18:57:13 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:21.601 18:57:13 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:21.601 18:57:13 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:21.601 18:57:13 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:23.500 Calling clear_iscsi_subsystem 00:06:23.500 Calling clear_nvmf_subsystem 00:06:23.500 Calling clear_nbd_subsystem 00:06:23.500 Calling clear_ublk_subsystem 00:06:23.500 Calling clear_vhost_blk_subsystem 00:06:23.500 Calling clear_vhost_scsi_subsystem 00:06:23.500 Calling clear_bdev_subsystem 00:06:23.500 18:57:15 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:23.500 18:57:15 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:23.500 18:57:15 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:23.500 18:57:15 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:23.500 18:57:15 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:23.500 18:57:15 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:23.500 18:57:15 json_config -- json_config/json_config.sh@349 -- # break 00:06:23.500 18:57:15 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:23.500 18:57:15 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:23.500 18:57:15 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:23.500 18:57:15 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:23.500 18:57:15 json_config -- json_config/common.sh@35 -- # [[ -n 773622 ]] 00:06:23.500 18:57:15 json_config -- json_config/common.sh@38 -- # kill -SIGINT 773622 00:06:23.500 18:57:15 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:23.500 18:57:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:23.500 18:57:15 json_config -- json_config/common.sh@41 -- # kill -0 773622 00:06:23.500 18:57:15 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:24.068 18:57:16 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:24.068 18:57:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:24.068 18:57:16 json_config -- json_config/common.sh@41 -- # kill -0 773622 00:06:24.068 18:57:16 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:24.068 18:57:16 json_config -- json_config/common.sh@43 -- # break 00:06:24.068 18:57:16 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:24.068 18:57:16 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:24.068 SPDK target shutdown done 00:06:24.068 18:57:16 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:24.068 INFO: relaunching applications... 
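The shutdown sequence traced above (send SIGINT to the target pid, then poll it with `kill -0` every half second for up to 30 tries) can be reproduced outside the harness. A minimal self-contained sketch, using a plain background `sleep` as a hypothetical stand-in for `spdk_tgt`, and SIGTERM instead of SIGINT because background jobs in non-interactive shells ignore SIGINT:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the SPDK target process.
sleep 300 &
app_pid=$!

# Request shutdown, then poll liveness with 'kill -0' the way
# json_config/common.sh does, giving up after 30 half-second waits.
kill -TERM "$app_pid"
for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$app_pid" 2> /dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done
```

`kill -0` sends no signal at all; it only checks whether the pid still refers to a live process, which is why the loop can distinguish "still draining" from "gone" without touching the target again.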
00:06:24.068 18:57:16 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.068 18:57:16 json_config -- json_config/common.sh@9 -- # local app=target 00:06:24.068 18:57:16 json_config -- json_config/common.sh@10 -- # shift 00:06:24.068 18:57:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:24.068 18:57:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:24.068 18:57:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:24.068 18:57:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:24.068 18:57:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:24.069 18:57:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=775443 00:06:24.069 18:57:16 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.069 18:57:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:24.069 Waiting for target to run... 00:06:24.069 18:57:16 json_config -- json_config/common.sh@25 -- # waitforlisten 775443 /var/tmp/spdk_tgt.sock 00:06:24.069 18:57:16 json_config -- common/autotest_common.sh@831 -- # '[' -z 775443 ']' 00:06:24.069 18:57:16 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:24.069 18:57:16 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.069 18:57:16 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:24.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:24.069 18:57:16 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.069 18:57:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.069 [2024-07-25 18:57:16.490920] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:24.069 [2024-07-25 18:57:16.491038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775443 ] 00:06:24.069 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.637 [2024-07-25 18:57:17.070493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.895 [2024-07-25 18:57:17.179746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.179 [2024-07-25 18:57:20.231800] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.179 [2024-07-25 18:57:20.264283] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:28.746 18:57:20 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.746 18:57:20 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:28.746 18:57:20 json_config -- json_config/common.sh@26 -- # echo '' 00:06:28.746 00:06:28.746 18:57:20 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:28.746 18:57:20 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:28.746 INFO: Checking if target configuration is the same... 
00:06:28.746 18:57:20 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:28.746 18:57:20 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:28.746 18:57:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:28.746 + '[' 2 -ne 2 ']' 00:06:28.746 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:28.746 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:28.746 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:28.746 +++ basename /dev/fd/62 00:06:28.746 ++ mktemp /tmp/62.XXX 00:06:28.746 + tmp_file_1=/tmp/62.zY4 00:06:28.746 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:28.746 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:28.746 + tmp_file_2=/tmp/spdk_tgt_config.json.JZ3 00:06:28.746 + ret=0 00:06:28.746 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:29.004 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:29.004 + diff -u /tmp/62.zY4 /tmp/spdk_tgt_config.json.JZ3 00:06:29.004 + echo 'INFO: JSON config files are the same' 00:06:29.004 INFO: JSON config files are the same 00:06:29.004 + rm /tmp/62.zY4 /tmp/spdk_tgt_config.json.JZ3 00:06:29.004 + exit 0 00:06:29.004 18:57:21 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:29.004 18:57:21 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:29.004 INFO: changing configuration and checking if this can be detected... 
00:06:29.004 18:57:21 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:29.004 18:57:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:29.263 18:57:21 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.263 18:57:21 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:29.263 18:57:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:29.263 + '[' 2 -ne 2 ']' 00:06:29.263 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:29.263 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:29.263 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:29.263 +++ basename /dev/fd/62 00:06:29.263 ++ mktemp /tmp/62.XXX 00:06:29.263 + tmp_file_1=/tmp/62.rvK 00:06:29.263 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.263 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:29.263 + tmp_file_2=/tmp/spdk_tgt_config.json.ffj 00:06:29.263 + ret=0 00:06:29.263 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:29.521 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:29.780 + diff -u /tmp/62.rvK /tmp/spdk_tgt_config.json.ffj 00:06:29.780 + ret=1 00:06:29.780 + echo '=== Start of file: /tmp/62.rvK ===' 00:06:29.780 + cat /tmp/62.rvK 00:06:29.780 + echo '=== End of file: /tmp/62.rvK ===' 00:06:29.780 + echo '' 00:06:29.780 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ffj ===' 00:06:29.780 + cat /tmp/spdk_tgt_config.json.ffj 00:06:29.780 + echo '=== End of file: /tmp/spdk_tgt_config.json.ffj ===' 00:06:29.780 + echo '' 00:06:29.780 + rm /tmp/62.rvK /tmp/spdk_tgt_config.json.ffj 00:06:29.780 + exit 1 00:06:29.780 18:57:22 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:29.780 INFO: configuration change detected. 
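The change-detection logic traced above boils down to: snapshot the live config, mutate it (here, deleting MallocBdevForConfigChangeCheck), snapshot again, normalize both dumps, and diff them. A standalone sketch of the same normalize-then-diff idea, using `python3 -m json.tool --sort-keys` in place of SPDK's `config_filter.py -method sort`, with hypothetical inline snapshots standing in for real `save_config` output:

```shell
#!/usr/bin/env bash
# Hypothetical before/after config snapshots; in the real test these
# come from 'rpc.py save_config' before and after the bdev deletion.
cat > /tmp/cfg_before.json <<'EOF'
{"subsystems": [{"config": [{"method": "bdev_malloc_create",
  "params": {"name": "MallocBdevForConfigChangeCheck"}}],
  "subsystem": "bdev"}]}
EOF
cat > /tmp/cfg_after.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": []}]}
EOF

# Normalize key order so the diff reflects content, not serialization order.
python3 -m json.tool --sort-keys /tmp/cfg_before.json > /tmp/cfg_before.norm
python3 -m json.tool --sort-keys /tmp/cfg_after.json  > /tmp/cfg_after.norm

if diff -u /tmp/cfg_before.norm /tmp/cfg_after.norm > /dev/null; then
    result='INFO: JSON config files are the same'
else
    result='INFO: configuration change detected.'
fi
echo "$result"
```

Sorting keys before diffing is what makes the earlier "JSON config files are the same" check reliable: two semantically identical dumps can serialize their keys in different orders, and a raw `diff` would report spurious changes.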
00:06:29.780 18:57:22 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:29.780 18:57:22 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.780 18:57:22 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:29.780 18:57:22 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:29.780 18:57:22 json_config -- json_config/json_config.sh@321 -- # [[ -n 775443 ]] 00:06:29.780 18:57:22 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:29.780 18:57:22 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.780 18:57:22 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:29.780 18:57:22 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:29.780 18:57:22 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:29.780 18:57:22 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:29.780 18:57:22 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:29.780 18:57:22 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.780 18:57:22 json_config -- json_config/json_config.sh@327 -- # killprocess 775443 00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@950 -- # '[' -z 775443 ']' 00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@954 -- # kill -0 775443 
00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@955 -- # uname 00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 775443 00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 775443' 00:06:29.780 killing process with pid 775443 00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@969 -- # kill 775443 00:06:29.780 18:57:22 json_config -- common/autotest_common.sh@974 -- # wait 775443 00:06:31.678 18:57:23 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:31.678 18:57:23 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:31.678 18:57:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.678 18:57:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.678 18:57:23 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:31.678 18:57:23 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:31.678 INFO: Success 00:06:31.678 00:06:31.678 real 0m16.738s 00:06:31.678 user 0m18.689s 00:06:31.678 sys 0m2.194s 00:06:31.678 18:57:23 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.678 18:57:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.678 ************************************ 00:06:31.678 END TEST json_config 00:06:31.678 ************************************ 00:06:31.678 18:57:23 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:31.678 18:57:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.678 18:57:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.678 18:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:31.678 ************************************ 00:06:31.678 START TEST json_config_extra_key 00:06:31.678 ************************************ 00:06:31.678 18:57:23 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:31.678 18:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:31.678 18:57:23 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.678 18:57:23 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.678 18:57:23 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.678 18:57:23 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.678 18:57:23 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.678 18:57:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.678 18:57:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.679 18:57:23 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.679 18:57:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:31.679 18:57:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.679 18:57:23 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:31.679 18:57:23 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:31.679 18:57:23 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:31.679 18:57:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.679 18:57:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.679 18:57:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.679 18:57:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:31.679 18:57:23 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:31.679 18:57:23 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:31.679 18:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:31.679 18:57:23 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:31.679 18:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:31.679 18:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:31.679 18:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:31.679 18:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:31.679 18:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:31.679 18:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:31.679 18:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:31.679 18:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:31.679 18:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:31.679 INFO: launching applications... 
00:06:31.679 18:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:31.679 18:57:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:31.679 18:57:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:31.679 18:57:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:31.679 18:57:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:31.679 18:57:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:31.679 18:57:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:31.679 18:57:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:31.679 18:57:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=776362 00:06:31.679 18:57:23 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:31.679 18:57:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:31.679 Waiting for target to run... 
00:06:31.679 18:57:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 776362 /var/tmp/spdk_tgt.sock 00:06:31.679 18:57:23 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 776362 ']' 00:06:31.679 18:57:23 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:31.679 18:57:23 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.679 18:57:23 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:31.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:31.679 18:57:23 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.679 18:57:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:31.679 [2024-07-25 18:57:23.866323] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:31.679 [2024-07-25 18:57:23.866431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776362 ] 00:06:31.679 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.937 [2024-07-25 18:57:24.230668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.937 [2024-07-25 18:57:24.320389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.503 18:57:24 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.503 18:57:24 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:32.503 18:57:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:32.503 00:06:32.503 18:57:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:32.503 INFO: shutting down applications... 
00:06:32.503 18:57:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:32.503 18:57:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:32.503 18:57:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:32.503 18:57:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 776362 ]] 00:06:32.503 18:57:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 776362 00:06:32.503 18:57:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:32.503 18:57:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:32.503 18:57:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 776362 00:06:32.503 18:57:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:33.067 18:57:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:33.067 18:57:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.067 18:57:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 776362 00:06:33.067 18:57:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:33.067 18:57:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:33.067 18:57:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:33.067 18:57:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:33.067 SPDK target shutdown done 00:06:33.067 18:57:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:33.067 Success 00:06:33.067 00:06:33.067 real 0m1.563s 00:06:33.067 user 0m1.560s 00:06:33.067 sys 0m0.461s 00:06:33.067 18:57:25 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.067 18:57:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:33.067 ************************************ 
00:06:33.067 END TEST json_config_extra_key 00:06:33.067 ************************************ 00:06:33.067 18:57:25 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:33.067 18:57:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.067 18:57:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.067 18:57:25 -- common/autotest_common.sh@10 -- # set +x 00:06:33.067 ************************************ 00:06:33.067 START TEST alias_rpc 00:06:33.067 ************************************ 00:06:33.067 18:57:25 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:33.067 * Looking for test storage... 00:06:33.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:33.067 18:57:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:33.067 18:57:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=776673 00:06:33.067 18:57:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.067 18:57:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 776673 00:06:33.067 18:57:25 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 776673 ']' 00:06:33.067 18:57:25 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.067 18:57:25 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.067 18:57:25 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:33.067 18:57:25 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.067 18:57:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.068 [2024-07-25 18:57:25.479892] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:33.068 [2024-07-25 18:57:25.479967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776673 ] 00:06:33.068 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.326 [2024-07-25 18:57:25.545962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.326 [2024-07-25 18:57:25.652156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.584 18:57:25 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.584 18:57:25 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:33.584 18:57:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:33.842 18:57:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 776673 00:06:33.842 18:57:26 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 776673 ']' 00:06:33.842 18:57:26 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 776673 00:06:33.842 18:57:26 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:33.842 18:57:26 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.842 18:57:26 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 776673 00:06:33.842 18:57:26 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.842 18:57:26 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.842 18:57:26 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 776673' 00:06:33.842 
killing process with pid 776673 00:06:33.842 18:57:26 alias_rpc -- common/autotest_common.sh@969 -- # kill 776673 00:06:33.842 18:57:26 alias_rpc -- common/autotest_common.sh@974 -- # wait 776673 00:06:34.409 00:06:34.409 real 0m1.314s 00:06:34.409 user 0m1.364s 00:06:34.409 sys 0m0.445s 00:06:34.409 18:57:26 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.409 18:57:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.409 ************************************ 00:06:34.409 END TEST alias_rpc 00:06:34.409 ************************************ 00:06:34.409 18:57:26 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:34.409 18:57:26 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:34.409 18:57:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.409 18:57:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.409 18:57:26 -- common/autotest_common.sh@10 -- # set +x 00:06:34.409 ************************************ 00:06:34.409 START TEST spdkcli_tcp 00:06:34.409 ************************************ 00:06:34.409 18:57:26 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:34.409 * Looking for test storage... 
00:06:34.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:34.409 18:57:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:34.409 18:57:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:34.409 18:57:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:34.409 18:57:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:34.409 18:57:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:34.409 18:57:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:34.409 18:57:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:34.409 18:57:26 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:34.409 18:57:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.409 18:57:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=776860 00:06:34.409 18:57:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:34.409 18:57:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 776860 00:06:34.409 18:57:26 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 776860 ']' 00:06:34.409 18:57:26 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.409 18:57:26 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.409 18:57:26 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:34.409 18:57:26 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.409 18:57:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.409 [2024-07-25 18:57:26.846147] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:34.409 [2024-07-25 18:57:26.846231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776860 ] 00:06:34.409 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.668 [2024-07-25 18:57:26.911550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.668 [2024-07-25 18:57:27.020722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.668 [2024-07-25 18:57:27.020728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.603 18:57:27 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.603 18:57:27 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:35.603 18:57:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=776997 00:06:35.603 18:57:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:35.603 18:57:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:35.603 [ 00:06:35.603 "bdev_malloc_delete", 00:06:35.603 "bdev_malloc_create", 00:06:35.603 "bdev_null_resize", 00:06:35.603 "bdev_null_delete", 00:06:35.603 "bdev_null_create", 00:06:35.603 "bdev_nvme_cuse_unregister", 00:06:35.603 "bdev_nvme_cuse_register", 00:06:35.603 "bdev_opal_new_user", 00:06:35.603 "bdev_opal_set_lock_state", 00:06:35.603 "bdev_opal_delete", 00:06:35.603 "bdev_opal_get_info", 00:06:35.603 "bdev_opal_create", 00:06:35.603 "bdev_nvme_opal_revert", 00:06:35.603 
"bdev_nvme_opal_init", 00:06:35.603 "bdev_nvme_send_cmd", 00:06:35.603 "bdev_nvme_get_path_iostat", 00:06:35.603 "bdev_nvme_get_mdns_discovery_info", 00:06:35.603 "bdev_nvme_stop_mdns_discovery", 00:06:35.603 "bdev_nvme_start_mdns_discovery", 00:06:35.603 "bdev_nvme_set_multipath_policy", 00:06:35.603 "bdev_nvme_set_preferred_path", 00:06:35.603 "bdev_nvme_get_io_paths", 00:06:35.603 "bdev_nvme_remove_error_injection", 00:06:35.603 "bdev_nvme_add_error_injection", 00:06:35.603 "bdev_nvme_get_discovery_info", 00:06:35.603 "bdev_nvme_stop_discovery", 00:06:35.603 "bdev_nvme_start_discovery", 00:06:35.603 "bdev_nvme_get_controller_health_info", 00:06:35.603 "bdev_nvme_disable_controller", 00:06:35.603 "bdev_nvme_enable_controller", 00:06:35.603 "bdev_nvme_reset_controller", 00:06:35.603 "bdev_nvme_get_transport_statistics", 00:06:35.603 "bdev_nvme_apply_firmware", 00:06:35.603 "bdev_nvme_detach_controller", 00:06:35.603 "bdev_nvme_get_controllers", 00:06:35.603 "bdev_nvme_attach_controller", 00:06:35.603 "bdev_nvme_set_hotplug", 00:06:35.603 "bdev_nvme_set_options", 00:06:35.603 "bdev_passthru_delete", 00:06:35.603 "bdev_passthru_create", 00:06:35.603 "bdev_lvol_set_parent_bdev", 00:06:35.603 "bdev_lvol_set_parent", 00:06:35.603 "bdev_lvol_check_shallow_copy", 00:06:35.603 "bdev_lvol_start_shallow_copy", 00:06:35.603 "bdev_lvol_grow_lvstore", 00:06:35.603 "bdev_lvol_get_lvols", 00:06:35.603 "bdev_lvol_get_lvstores", 00:06:35.603 "bdev_lvol_delete", 00:06:35.603 "bdev_lvol_set_read_only", 00:06:35.603 "bdev_lvol_resize", 00:06:35.603 "bdev_lvol_decouple_parent", 00:06:35.603 "bdev_lvol_inflate", 00:06:35.603 "bdev_lvol_rename", 00:06:35.603 "bdev_lvol_clone_bdev", 00:06:35.603 "bdev_lvol_clone", 00:06:35.603 "bdev_lvol_snapshot", 00:06:35.603 "bdev_lvol_create", 00:06:35.603 "bdev_lvol_delete_lvstore", 00:06:35.603 "bdev_lvol_rename_lvstore", 00:06:35.603 "bdev_lvol_create_lvstore", 00:06:35.603 "bdev_raid_set_options", 00:06:35.603 "bdev_raid_remove_base_bdev", 
00:06:35.603 "bdev_raid_add_base_bdev", 00:06:35.603 "bdev_raid_delete", 00:06:35.603 "bdev_raid_create", 00:06:35.603 "bdev_raid_get_bdevs", 00:06:35.603 "bdev_error_inject_error", 00:06:35.603 "bdev_error_delete", 00:06:35.603 "bdev_error_create", 00:06:35.603 "bdev_split_delete", 00:06:35.603 "bdev_split_create", 00:06:35.603 "bdev_delay_delete", 00:06:35.603 "bdev_delay_create", 00:06:35.603 "bdev_delay_update_latency", 00:06:35.603 "bdev_zone_block_delete", 00:06:35.603 "bdev_zone_block_create", 00:06:35.603 "blobfs_create", 00:06:35.603 "blobfs_detect", 00:06:35.603 "blobfs_set_cache_size", 00:06:35.603 "bdev_aio_delete", 00:06:35.603 "bdev_aio_rescan", 00:06:35.603 "bdev_aio_create", 00:06:35.603 "bdev_ftl_set_property", 00:06:35.603 "bdev_ftl_get_properties", 00:06:35.603 "bdev_ftl_get_stats", 00:06:35.603 "bdev_ftl_unmap", 00:06:35.603 "bdev_ftl_unload", 00:06:35.603 "bdev_ftl_delete", 00:06:35.603 "bdev_ftl_load", 00:06:35.603 "bdev_ftl_create", 00:06:35.603 "bdev_virtio_attach_controller", 00:06:35.603 "bdev_virtio_scsi_get_devices", 00:06:35.603 "bdev_virtio_detach_controller", 00:06:35.603 "bdev_virtio_blk_set_hotplug", 00:06:35.603 "bdev_iscsi_delete", 00:06:35.603 "bdev_iscsi_create", 00:06:35.603 "bdev_iscsi_set_options", 00:06:35.603 "accel_error_inject_error", 00:06:35.603 "ioat_scan_accel_module", 00:06:35.603 "dsa_scan_accel_module", 00:06:35.603 "iaa_scan_accel_module", 00:06:35.603 "vfu_virtio_create_scsi_endpoint", 00:06:35.603 "vfu_virtio_scsi_remove_target", 00:06:35.603 "vfu_virtio_scsi_add_target", 00:06:35.603 "vfu_virtio_create_blk_endpoint", 00:06:35.603 "vfu_virtio_delete_endpoint", 00:06:35.603 "keyring_file_remove_key", 00:06:35.603 "keyring_file_add_key", 00:06:35.603 "keyring_linux_set_options", 00:06:35.603 "iscsi_get_histogram", 00:06:35.603 "iscsi_enable_histogram", 00:06:35.603 "iscsi_set_options", 00:06:35.603 "iscsi_get_auth_groups", 00:06:35.603 "iscsi_auth_group_remove_secret", 00:06:35.603 "iscsi_auth_group_add_secret", 
00:06:35.603 "iscsi_delete_auth_group", 00:06:35.603 "iscsi_create_auth_group", 00:06:35.603 "iscsi_set_discovery_auth", 00:06:35.603 "iscsi_get_options", 00:06:35.603 "iscsi_target_node_request_logout", 00:06:35.603 "iscsi_target_node_set_redirect", 00:06:35.603 "iscsi_target_node_set_auth", 00:06:35.603 "iscsi_target_node_add_lun", 00:06:35.603 "iscsi_get_stats", 00:06:35.603 "iscsi_get_connections", 00:06:35.603 "iscsi_portal_group_set_auth", 00:06:35.603 "iscsi_start_portal_group", 00:06:35.603 "iscsi_delete_portal_group", 00:06:35.603 "iscsi_create_portal_group", 00:06:35.603 "iscsi_get_portal_groups", 00:06:35.603 "iscsi_delete_target_node", 00:06:35.603 "iscsi_target_node_remove_pg_ig_maps", 00:06:35.603 "iscsi_target_node_add_pg_ig_maps", 00:06:35.603 "iscsi_create_target_node", 00:06:35.603 "iscsi_get_target_nodes", 00:06:35.603 "iscsi_delete_initiator_group", 00:06:35.603 "iscsi_initiator_group_remove_initiators", 00:06:35.603 "iscsi_initiator_group_add_initiators", 00:06:35.603 "iscsi_create_initiator_group", 00:06:35.603 "iscsi_get_initiator_groups", 00:06:35.603 "nvmf_set_crdt", 00:06:35.603 "nvmf_set_config", 00:06:35.603 "nvmf_set_max_subsystems", 00:06:35.603 "nvmf_stop_mdns_prr", 00:06:35.603 "nvmf_publish_mdns_prr", 00:06:35.603 "nvmf_subsystem_get_listeners", 00:06:35.603 "nvmf_subsystem_get_qpairs", 00:06:35.603 "nvmf_subsystem_get_controllers", 00:06:35.603 "nvmf_get_stats", 00:06:35.603 "nvmf_get_transports", 00:06:35.603 "nvmf_create_transport", 00:06:35.603 "nvmf_get_targets", 00:06:35.603 "nvmf_delete_target", 00:06:35.603 "nvmf_create_target", 00:06:35.603 "nvmf_subsystem_allow_any_host", 00:06:35.603 "nvmf_subsystem_remove_host", 00:06:35.603 "nvmf_subsystem_add_host", 00:06:35.603 "nvmf_ns_remove_host", 00:06:35.603 "nvmf_ns_add_host", 00:06:35.603 "nvmf_subsystem_remove_ns", 00:06:35.603 "nvmf_subsystem_add_ns", 00:06:35.603 "nvmf_subsystem_listener_set_ana_state", 00:06:35.603 "nvmf_discovery_get_referrals", 00:06:35.603 
"nvmf_discovery_remove_referral", 00:06:35.603 "nvmf_discovery_add_referral", 00:06:35.603 "nvmf_subsystem_remove_listener", 00:06:35.603 "nvmf_subsystem_add_listener", 00:06:35.603 "nvmf_delete_subsystem", 00:06:35.603 "nvmf_create_subsystem", 00:06:35.603 "nvmf_get_subsystems", 00:06:35.603 "env_dpdk_get_mem_stats", 00:06:35.603 "nbd_get_disks", 00:06:35.603 "nbd_stop_disk", 00:06:35.603 "nbd_start_disk", 00:06:35.603 "ublk_recover_disk", 00:06:35.603 "ublk_get_disks", 00:06:35.603 "ublk_stop_disk", 00:06:35.603 "ublk_start_disk", 00:06:35.603 "ublk_destroy_target", 00:06:35.603 "ublk_create_target", 00:06:35.603 "virtio_blk_create_transport", 00:06:35.603 "virtio_blk_get_transports", 00:06:35.603 "vhost_controller_set_coalescing", 00:06:35.604 "vhost_get_controllers", 00:06:35.604 "vhost_delete_controller", 00:06:35.604 "vhost_create_blk_controller", 00:06:35.604 "vhost_scsi_controller_remove_target", 00:06:35.604 "vhost_scsi_controller_add_target", 00:06:35.604 "vhost_start_scsi_controller", 00:06:35.604 "vhost_create_scsi_controller", 00:06:35.604 "thread_set_cpumask", 00:06:35.604 "framework_get_governor", 00:06:35.604 "framework_get_scheduler", 00:06:35.604 "framework_set_scheduler", 00:06:35.604 "framework_get_reactors", 00:06:35.604 "thread_get_io_channels", 00:06:35.604 "thread_get_pollers", 00:06:35.604 "thread_get_stats", 00:06:35.604 "framework_monitor_context_switch", 00:06:35.604 "spdk_kill_instance", 00:06:35.604 "log_enable_timestamps", 00:06:35.604 "log_get_flags", 00:06:35.604 "log_clear_flag", 00:06:35.604 "log_set_flag", 00:06:35.604 "log_get_level", 00:06:35.604 "log_set_level", 00:06:35.604 "log_get_print_level", 00:06:35.604 "log_set_print_level", 00:06:35.604 "framework_enable_cpumask_locks", 00:06:35.604 "framework_disable_cpumask_locks", 00:06:35.604 "framework_wait_init", 00:06:35.604 "framework_start_init", 00:06:35.604 "scsi_get_devices", 00:06:35.604 "bdev_get_histogram", 00:06:35.604 "bdev_enable_histogram", 00:06:35.604 
"bdev_set_qos_limit", 00:06:35.604 "bdev_set_qd_sampling_period", 00:06:35.604 "bdev_get_bdevs", 00:06:35.604 "bdev_reset_iostat", 00:06:35.604 "bdev_get_iostat", 00:06:35.604 "bdev_examine", 00:06:35.604 "bdev_wait_for_examine", 00:06:35.604 "bdev_set_options", 00:06:35.604 "notify_get_notifications", 00:06:35.604 "notify_get_types", 00:06:35.604 "accel_get_stats", 00:06:35.604 "accel_set_options", 00:06:35.604 "accel_set_driver", 00:06:35.604 "accel_crypto_key_destroy", 00:06:35.604 "accel_crypto_keys_get", 00:06:35.604 "accel_crypto_key_create", 00:06:35.604 "accel_assign_opc", 00:06:35.604 "accel_get_module_info", 00:06:35.604 "accel_get_opc_assignments", 00:06:35.604 "vmd_rescan", 00:06:35.604 "vmd_remove_device", 00:06:35.604 "vmd_enable", 00:06:35.604 "sock_get_default_impl", 00:06:35.604 "sock_set_default_impl", 00:06:35.604 "sock_impl_set_options", 00:06:35.604 "sock_impl_get_options", 00:06:35.604 "iobuf_get_stats", 00:06:35.604 "iobuf_set_options", 00:06:35.604 "keyring_get_keys", 00:06:35.604 "framework_get_pci_devices", 00:06:35.604 "framework_get_config", 00:06:35.604 "framework_get_subsystems", 00:06:35.604 "vfu_tgt_set_base_path", 00:06:35.604 "trace_get_info", 00:06:35.604 "trace_get_tpoint_group_mask", 00:06:35.604 "trace_disable_tpoint_group", 00:06:35.604 "trace_enable_tpoint_group", 00:06:35.604 "trace_clear_tpoint_mask", 00:06:35.604 "trace_set_tpoint_mask", 00:06:35.604 "spdk_get_version", 00:06:35.604 "rpc_get_methods" 00:06:35.604 ] 00:06:35.604 18:57:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:35.604 18:57:28 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:35.604 18:57:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.604 18:57:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:35.604 18:57:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 776860 00:06:35.604 18:57:28 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 776860 ']' 
00:06:35.604 18:57:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 776860 00:06:35.604 18:57:28 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:35.604 18:57:28 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.604 18:57:28 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 776860 00:06:35.604 18:57:28 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.604 18:57:28 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.604 18:57:28 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 776860' 00:06:35.604 killing process with pid 776860 00:06:35.604 18:57:28 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 776860 00:06:35.604 18:57:28 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 776860 00:06:36.208 00:06:36.208 real 0m1.789s 00:06:36.208 user 0m3.398s 00:06:36.208 sys 0m0.502s 00:06:36.208 18:57:28 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.208 18:57:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.208 ************************************ 00:06:36.208 END TEST spdkcli_tcp 00:06:36.208 ************************************ 00:06:36.208 18:57:28 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:36.208 18:57:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.208 18:57:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.208 18:57:28 -- common/autotest_common.sh@10 -- # set +x 00:06:36.208 ************************************ 00:06:36.208 START TEST dpdk_mem_utility 00:06:36.208 ************************************ 00:06:36.208 18:57:28 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:36.208 * 
Looking for test storage... 00:06:36.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:36.208 18:57:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:36.208 18:57:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=777195 00:06:36.208 18:57:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.208 18:57:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 777195 00:06:36.208 18:57:28 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 777195 ']' 00:06:36.208 18:57:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.208 18:57:28 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.208 18:57:28 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.208 18:57:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.208 18:57:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:36.466 [2024-07-25 18:57:28.681663] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:36.466 [2024-07-25 18:57:28.681739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777195 ] 00:06:36.466 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.466 [2024-07-25 18:57:28.747410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.466 [2024-07-25 18:57:28.853493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.725 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.725 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:36.725 18:57:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:36.725 18:57:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:36.725 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.725 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:36.725 { 00:06:36.725 "filename": "/tmp/spdk_mem_dump.txt" 00:06:36.725 } 00:06:36.725 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.725 18:57:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:36.725 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:36.725 1 heaps totaling size 814.000000 MiB 00:06:36.725 size: 814.000000 MiB heap id: 0 00:06:36.725 end heaps---------- 00:06:36.725 8 mempools totaling size 598.116089 MiB 00:06:36.725 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:36.725 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:36.725 size: 84.521057 MiB name: bdev_io_777195 00:06:36.725 size: 51.011292 MiB name: evtpool_777195 
00:06:36.725 size: 50.003479 MiB name: msgpool_777195 00:06:36.725 size: 21.763794 MiB name: PDU_Pool 00:06:36.725 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:36.725 size: 0.026123 MiB name: Session_Pool 00:06:36.725 end mempools------- 00:06:36.725 6 memzones totaling size 4.142822 MiB 00:06:36.725 size: 1.000366 MiB name: RG_ring_0_777195 00:06:36.725 size: 1.000366 MiB name: RG_ring_1_777195 00:06:36.725 size: 1.000366 MiB name: RG_ring_4_777195 00:06:36.725 size: 1.000366 MiB name: RG_ring_5_777195 00:06:36.725 size: 0.125366 MiB name: RG_ring_2_777195 00:06:36.725 size: 0.015991 MiB name: RG_ring_3_777195 00:06:36.725 end memzones------- 00:06:36.983 18:57:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:36.983 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:36.983 list of free elements. size: 12.519348 MiB 00:06:36.983 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:36.983 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:36.983 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:36.983 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:36.983 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:36.983 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:36.983 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:36.983 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:36.983 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:36.984 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:36.984 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:36.984 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:36.984 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:36.984 element at address: 0x200027e00000 with size: 0.410034 MiB 
00:06:36.984 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:36.984 list of standard malloc elements. size: 199.218079 MiB 00:06:36.984 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:36.984 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:36.984 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:36.984 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:36.984 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:36.984 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:36.984 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:36.984 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:36.984 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:36.984 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:36.984 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:36.984 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:36.984 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:36.984 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:36.984 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:36.984 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:36.984 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:36.984 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:36.984 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:36.984 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:36.984 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:36.984 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:36.984 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:36.984 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:36.984 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:36.984 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:06:36.984 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:36.984 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:36.984 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:36.984 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:36.984 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:36.984 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:36.984 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:36.984 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:36.984 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:36.984 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:36.984 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:36.984 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:36.984 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:36.984 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:36.984 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:36.984 list of memzone associated elements. 
size: 602.262573 MiB 00:06:36.984 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:36.984 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:36.984 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:36.984 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:36.984 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:36.984 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_777195_0 00:06:36.984 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:36.984 associated memzone info: size: 48.002930 MiB name: MP_evtpool_777195_0 00:06:36.984 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:36.984 associated memzone info: size: 48.002930 MiB name: MP_msgpool_777195_0 00:06:36.984 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:36.984 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:36.984 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:36.984 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:36.984 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:36.984 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_777195 00:06:36.984 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:36.984 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_777195 00:06:36.984 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:36.984 associated memzone info: size: 1.007996 MiB name: MP_evtpool_777195 00:06:36.984 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:36.984 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:36.984 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:36.984 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:36.984 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:36.984 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:36.984 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:36.984 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:36.984 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:36.984 associated memzone info: size: 1.000366 MiB name: RG_ring_0_777195 00:06:36.984 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:36.984 associated memzone info: size: 1.000366 MiB name: RG_ring_1_777195 00:06:36.984 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:36.984 associated memzone info: size: 1.000366 MiB name: RG_ring_4_777195 00:06:36.984 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:36.984 associated memzone info: size: 1.000366 MiB name: RG_ring_5_777195 00:06:36.984 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:36.984 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_777195 00:06:36.984 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:36.984 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:36.984 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:36.984 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:36.984 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:36.984 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:36.984 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:36.984 associated memzone info: size: 0.125366 MiB name: RG_ring_2_777195 00:06:36.984 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:36.984 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:36.984 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:36.984 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:36.984 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:36.984 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_777195 00:06:36.984 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:36.984 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:36.984 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:36.984 associated memzone info: size: 0.000183 MiB name: MP_msgpool_777195 00:06:36.984 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:36.984 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_777195 00:06:36.984 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:36.984 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:36.984 18:57:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:36.984 18:57:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 777195 00:06:36.984 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 777195 ']' 00:06:36.984 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 777195 00:06:36.984 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:36.984 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.984 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 777195 00:06:36.984 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.984 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.984 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 777195' 00:06:36.984 killing process with pid 777195 00:06:36.984 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 777195 00:06:36.984 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 777195 00:06:37.551 00:06:37.551 real 0m1.155s 00:06:37.551 user 0m1.106s 
00:06:37.551 sys 0m0.422s 00:06:37.551 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.551 18:57:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:37.551 ************************************ 00:06:37.551 END TEST dpdk_mem_utility 00:06:37.551 ************************************ 00:06:37.551 18:57:29 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:37.551 18:57:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.552 18:57:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.552 18:57:29 -- common/autotest_common.sh@10 -- # set +x 00:06:37.552 ************************************ 00:06:37.552 START TEST event 00:06:37.552 ************************************ 00:06:37.552 18:57:29 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:37.552 * Looking for test storage... 
00:06:37.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:37.552 18:57:29 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:37.552 18:57:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:37.552 18:57:29 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:37.552 18:57:29 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:37.552 18:57:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.552 18:57:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.552 ************************************ 00:06:37.552 START TEST event_perf 00:06:37.552 ************************************ 00:06:37.552 18:57:29 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:37.552 Running I/O for 1 seconds...[2024-07-25 18:57:29.867247] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:37.552 [2024-07-25 18:57:29.867309] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777385 ] 00:06:37.552 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.552 [2024-07-25 18:57:29.942192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.811 [2024-07-25 18:57:30.077131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.811 [2024-07-25 18:57:30.077174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.811 [2024-07-25 18:57:30.077278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.811 [2024-07-25 18:57:30.077274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.745 Running I/O for 1 seconds... 00:06:38.745 lcore 0: 233032 00:06:38.745 lcore 1: 233033 00:06:38.745 lcore 2: 233033 00:06:38.745 lcore 3: 233033 00:06:38.745 done. 
00:06:38.745 00:06:38.745 real 0m1.350s 00:06:38.745 user 0m4.239s 00:06:38.745 sys 0m0.099s 00:06:38.745 18:57:31 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.745 18:57:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.745 ************************************ 00:06:38.745 END TEST event_perf 00:06:38.745 ************************************ 00:06:39.003 18:57:31 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:39.003 18:57:31 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:39.003 18:57:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.003 18:57:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.003 ************************************ 00:06:39.003 START TEST event_reactor 00:06:39.003 ************************************ 00:06:39.004 18:57:31 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:39.004 [2024-07-25 18:57:31.268494] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:39.004 [2024-07-25 18:57:31.268559] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777540 ] 00:06:39.004 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.004 [2024-07-25 18:57:31.343049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.004 [2024-07-25 18:57:31.461748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.377 test_start 00:06:40.377 oneshot 00:06:40.377 tick 100 00:06:40.377 tick 100 00:06:40.377 tick 250 00:06:40.377 tick 100 00:06:40.377 tick 100 00:06:40.377 tick 100 00:06:40.377 tick 250 00:06:40.377 tick 500 00:06:40.377 tick 100 00:06:40.377 tick 100 00:06:40.377 tick 250 00:06:40.377 tick 100 00:06:40.377 tick 100 00:06:40.377 test_end 00:06:40.377 00:06:40.377 real 0m1.332s 00:06:40.377 user 0m1.229s 00:06:40.377 sys 0m0.098s 00:06:40.377 18:57:32 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.377 18:57:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:40.377 ************************************ 00:06:40.377 END TEST event_reactor 00:06:40.377 ************************************ 00:06:40.377 18:57:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:40.377 18:57:32 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:40.377 18:57:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.377 18:57:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:40.377 ************************************ 00:06:40.377 START TEST event_reactor_perf 00:06:40.377 ************************************ 00:06:40.377 18:57:32 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:40.377 [2024-07-25 18:57:32.650674] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:40.377 [2024-07-25 18:57:32.650742] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777700 ] 00:06:40.377 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.377 [2024-07-25 18:57:32.721996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.377 [2024-07-25 18:57:32.842543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.751 test_start 00:06:41.751 test_end 00:06:41.751 Performance: 360452 events per second 00:06:41.751 00:06:41.751 real 0m1.330s 00:06:41.751 user 0m1.238s 00:06:41.751 sys 0m0.087s 00:06:41.751 18:57:33 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.751 18:57:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.751 ************************************ 00:06:41.751 END TEST event_reactor_perf 00:06:41.751 ************************************ 00:06:41.751 18:57:33 event -- event/event.sh@49 -- # uname -s 00:06:41.751 18:57:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:41.751 18:57:33 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:41.751 18:57:33 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.751 18:57:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.751 18:57:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.751 ************************************ 00:06:41.751 START TEST event_scheduler 00:06:41.751 ************************************ 
00:06:41.751 18:57:34 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:41.751 * Looking for test storage... 00:06:41.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:41.751 18:57:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:41.751 18:57:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=777937 00:06:41.751 18:57:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:41.751 18:57:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.751 18:57:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 777937 00:06:41.751 18:57:34 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 777937 ']' 00:06:41.751 18:57:34 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.751 18:57:34 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.751 18:57:34 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.751 18:57:34 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.751 18:57:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:41.751 [2024-07-25 18:57:34.105032] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:41.751 [2024-07-25 18:57:34.105159] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777937 ] 00:06:41.751 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.751 [2024-07-25 18:57:34.176370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.009 [2024-07-25 18:57:34.294211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.009 [2024-07-25 18:57:34.294244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.009 [2024-07-25 18:57:34.294304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.009 [2024-07-25 18:57:34.294307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.009 18:57:34 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.009 18:57:34 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:42.009 18:57:34 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:42.009 18:57:34 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.009 18:57:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.009 [2024-07-25 18:57:34.335140] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:42.009 [2024-07-25 18:57:34.335178] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:42.009 [2024-07-25 18:57:34.335210] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:42.009 [2024-07-25 18:57:34.335221] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:42.009 [2024-07-25 18:57:34.335231] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:06:42.009 18:57:34 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.009 18:57:34 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:42.009 18:57:34 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.009 18:57:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.009 [2024-07-25 18:57:34.432932] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:42.009 18:57:34 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.009 18:57:34 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:42.009 18:57:34 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.009 18:57:34 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.009 18:57:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.010 ************************************ 00:06:42.010 START TEST scheduler_create_thread 00:06:42.010 ************************************ 00:06:42.010 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:42.010 18:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:42.010 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.010 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.010 2 00:06:42.010 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.010 18:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:42.010 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.010 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.268 3 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.268 4 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.268 5 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.268 6 
00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.268 7 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.268 8 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.268 9 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:42.268 18:57:34 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.268 10 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.268 18:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.202 18:57:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.202 00:06:43.202 real 0m1.173s 00:06:43.202 user 0m0.009s 00:06:43.202 sys 0m0.005s 00:06:43.202 18:57:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.202 18:57:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.202 ************************************ 00:06:43.202 END TEST scheduler_create_thread 00:06:43.202 ************************************ 00:06:43.202 18:57:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:43.202 18:57:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 777937 00:06:43.202 18:57:35 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 777937 ']' 00:06:43.202 18:57:35 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 777937 00:06:43.202 18:57:35 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:43.202 18:57:35 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.202 18:57:35 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 777937 00:06:43.460 18:57:35 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:43.460 18:57:35 
event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:43.460 18:57:35 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 777937' 00:06:43.460 killing process with pid 777937 00:06:43.460 18:57:35 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 777937 00:06:43.460 18:57:35 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 777937 00:06:43.717 [2024-07-25 18:57:36.114964] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:43.976 00:06:43.976 real 0m2.365s 00:06:43.976 user 0m2.677s 00:06:43.976 sys 0m0.357s 00:06:43.976 18:57:36 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.976 18:57:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:43.976 ************************************ 00:06:43.976 END TEST event_scheduler 00:06:43.976 ************************************ 00:06:43.976 18:57:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:43.976 18:57:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:43.976 18:57:36 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.976 18:57:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.976 18:57:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.976 ************************************ 00:06:43.976 START TEST app_repeat 00:06:43.976 ************************************ 00:06:43.976 18:57:36 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:43.976 18:57:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.976 18:57:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.976 18:57:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:43.976 18:57:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.976 
18:57:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:43.976 18:57:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:43.976 18:57:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:43.976 18:57:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=778322 00:06:43.976 18:57:36 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:43.976 18:57:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.976 18:57:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 778322' 00:06:43.976 Process app_repeat pid: 778322 00:06:43.976 18:57:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:43.976 18:57:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:43.976 spdk_app_start Round 0 00:06:43.976 18:57:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 778322 /var/tmp/spdk-nbd.sock 00:06:43.976 18:57:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 778322 ']' 00:06:43.976 18:57:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.976 18:57:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.976 18:57:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:43.976 18:57:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.976 18:57:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.235 [2024-07-25 18:57:36.457964] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:44.235 [2024-07-25 18:57:36.458028] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778322 ] 00:06:44.235 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.235 [2024-07-25 18:57:36.529479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.235 [2024-07-25 18:57:36.648296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.235 [2024-07-25 18:57:36.648303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.492 18:57:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.492 18:57:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:44.492 18:57:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.750 Malloc0 00:06:44.750 18:57:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.008 Malloc1 00:06:45.008 18:57:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.008 18:57:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.008 18:57:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.008 18:57:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.008 18:57:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.008 18:57:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.008 18:57:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks 
/var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.008 18:57:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.008 18:57:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.008 18:57:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.008 18:57:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.008 18:57:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.008 18:57:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.008 18:57:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.008 18:57:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.008 18:57:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:45.265 /dev/nbd0 00:06:45.265 18:57:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.265 18:57:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.265 18:57:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:45.265 18:57:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:45.265 18:57:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:45.265 18:57:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:45.265 18:57:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:45.265 18:57:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:45.265 18:57:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:45.265 18:57:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:45.265 18:57:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.265 1+0 records in 00:06:45.265 1+0 records out 00:06:45.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155379 s, 26.4 MB/s 00:06:45.265 18:57:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.265 18:57:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:45.265 18:57:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.265 18:57:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:45.265 18:57:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:45.265 18:57:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.265 18:57:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.266 18:57:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.523 /dev/nbd1 00:06:45.523 18:57:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.524 18:57:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.524 18:57:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:45.524 18:57:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:45.524 18:57:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:45.524 18:57:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:45.524 18:57:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:45.524 18:57:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:45.524 18:57:37 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:45.524 18:57:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:45.524 18:57:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.524 1+0 records in 00:06:45.524 1+0 records out 00:06:45.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220216 s, 18.6 MB/s 00:06:45.524 18:57:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.524 18:57:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:45.524 18:57:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.524 18:57:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:45.524 18:57:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:45.524 18:57:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.524 18:57:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.524 18:57:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.524 18:57:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.524 18:57:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.782 { 00:06:45.782 "nbd_device": "/dev/nbd0", 00:06:45.782 "bdev_name": "Malloc0" 00:06:45.782 }, 00:06:45.782 { 00:06:45.782 "nbd_device": "/dev/nbd1", 00:06:45.782 "bdev_name": "Malloc1" 00:06:45.782 } 00:06:45.782 ]' 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.782 { 
00:06:45.782 "nbd_device": "/dev/nbd0", 00:06:45.782 "bdev_name": "Malloc0" 00:06:45.782 }, 00:06:45.782 { 00:06:45.782 "nbd_device": "/dev/nbd1", 00:06:45.782 "bdev_name": "Malloc1" 00:06:45.782 } 00:06:45.782 ]' 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.782 /dev/nbd1' 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.782 /dev/nbd1' 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.782 256+0 records in 00:06:45.782 256+0 records out 00:06:45.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505402 s, 207 MB/s 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.782 256+0 records in 00:06:45.782 256+0 records out 00:06:45.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242284 s, 43.3 MB/s 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.782 256+0 records in 00:06:45.782 256+0 records out 00:06:45.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259184 s, 40.5 MB/s 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.782 18:57:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.041 18:57:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.299 18:57:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.299 18:57:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.299 18:57:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.299 18:57:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.299 18:57:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.299 18:57:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.299 18:57:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.299 18:57:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.299 18:57:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.557 18:57:38 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.557 18:57:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.557 18:57:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.557 18:57:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.557 18:57:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.557 18:57:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.557 18:57:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.557 18:57:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.557 18:57:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.557 18:57:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.557 18:57:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.815 18:57:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.815 18:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.815 18:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.815 18:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.815 18:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.815 18:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.815 18:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:46.815 18:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.815 18:57:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.815 18:57:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.815 18:57:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.815 18:57:39 event.app_repeat -- 
bdev/nbd_common.sh@109 -- # return 0 00:06:46.815 18:57:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:47.073 18:57:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:47.331 [2024-07-25 18:57:39.642058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.331 [2024-07-25 18:57:39.751617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.331 [2024-07-25 18:57:39.751617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.590 [2024-07-25 18:57:39.815895] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.590 [2024-07-25 18:57:39.815960] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.120 18:57:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.120 18:57:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:50.120 spdk_app_start Round 1 00:06:50.120 18:57:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 778322 /var/tmp/spdk-nbd.sock 00:06:50.120 18:57:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 778322 ']' 00:06:50.120 18:57:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.120 18:57:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.120 18:57:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:50.120 18:57:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.120 18:57:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.378 18:57:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.378 18:57:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:50.378 18:57:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.378 Malloc0 00:06:50.635 18:57:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.893 Malloc1 00:06:50.893 18:57:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.893 18:57:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.893 18:57:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.893 18:57:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.893 18:57:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.893 18:57:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.893 18:57:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.893 18:57:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.893 18:57:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.893 18:57:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.893 18:57:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.894 18:57:43 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:50.894 18:57:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:50.894 18:57:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.894 18:57:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.894 18:57:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.151 /dev/nbd0 00:06:51.151 18:57:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.151 18:57:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.151 18:57:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:51.151 18:57:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:51.151 18:57:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.151 18:57:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.151 18:57:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:51.151 18:57:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:51.151 18:57:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.151 18:57:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.151 18:57:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.151 1+0 records in 00:06:51.151 1+0 records out 00:06:51.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174078 s, 23.5 MB/s 00:06:51.151 18:57:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.151 18:57:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:51.151 18:57:43 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.151 18:57:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.151 18:57:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:51.151 18:57:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.151 18:57:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.151 18:57:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:51.408 /dev/nbd1 00:06:51.408 18:57:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.408 18:57:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.408 18:57:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:51.408 18:57:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:51.408 18:57:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.408 18:57:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.408 18:57:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:51.408 18:57:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:51.408 18:57:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.408 18:57:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.408 18:57:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.408 1+0 records in 00:06:51.408 1+0 records out 00:06:51.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185116 s, 22.1 MB/s 00:06:51.408 18:57:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.408 18:57:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:51.408 18:57:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.408 18:57:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.408 18:57:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:51.408 18:57:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.408 18:57:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.408 18:57:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.408 18:57:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.408 18:57:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.667 { 00:06:51.667 "nbd_device": "/dev/nbd0", 00:06:51.667 "bdev_name": "Malloc0" 00:06:51.667 }, 00:06:51.667 { 00:06:51.667 "nbd_device": "/dev/nbd1", 00:06:51.667 "bdev_name": "Malloc1" 00:06:51.667 } 00:06:51.667 ]' 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.667 { 00:06:51.667 "nbd_device": "/dev/nbd0", 00:06:51.667 "bdev_name": "Malloc0" 00:06:51.667 }, 00:06:51.667 { 00:06:51.667 "nbd_device": "/dev/nbd1", 00:06:51.667 "bdev_name": "Malloc1" 00:06:51.667 } 00:06:51.667 ]' 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:51.667 /dev/nbd1' 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:51.667 /dev/nbd1' 00:06:51.667 
18:57:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:51.667 256+0 records in 00:06:51.667 256+0 records out 00:06:51.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497293 s, 211 MB/s 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.667 18:57:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:51.667 256+0 records in 00:06:51.667 256+0 records out 00:06:51.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240657 s, 43.6 MB/s 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:51.667 256+0 records in 00:06:51.667 256+0 records out 00:06:51.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239811 s, 43.7 MB/s 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.667 18:57:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.925 18:57:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.925 18:57:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.925 18:57:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.925 18:57:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.925 18:57:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.925 18:57:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.925 18:57:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.925 18:57:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.925 18:57:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.925 18:57:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.183 18:57:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.183 18:57:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.183 18:57:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.183 18:57:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.183 18:57:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.183 18:57:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.183 18:57:44 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:52.183 18:57:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.183 18:57:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.183 18:57:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.183 18:57:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.440 18:57:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.440 18:57:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.440 18:57:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.440 18:57:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.440 18:57:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.440 18:57:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.440 18:57:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:52.440 18:57:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.440 18:57:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.440 18:57:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.440 18:57:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.440 18:57:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.440 18:57:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.698 18:57:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:53.264 [2024-07-25 18:57:45.444722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.264 [2024-07-25 18:57:45.564324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.264 [2024-07-25 18:57:45.564329] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.264 [2024-07-25 18:57:45.629449] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.264 [2024-07-25 18:57:45.629522] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:55.824 18:57:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:55.824 18:57:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:55.824 spdk_app_start Round 2 00:06:55.824 18:57:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 778322 /var/tmp/spdk-nbd.sock 00:06:55.824 18:57:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 778322 ']' 00:06:55.824 18:57:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:55.824 18:57:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.824 18:57:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:55.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:55.824 18:57:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.825 18:57:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.083 18:57:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.083 18:57:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:56.083 18:57:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.341 Malloc0 00:06:56.341 18:57:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.600 Malloc1 00:06:56.600 18:57:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.600 18:57:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.600 18:57:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.600 18:57:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:56.600 18:57:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.600 18:57:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:56.600 18:57:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.600 18:57:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.600 18:57:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.600 18:57:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.600 18:57:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.600 18:57:48 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:56.600 18:57:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:56.600 18:57:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.600 18:57:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.600 18:57:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:56.858 /dev/nbd0 00:06:56.858 18:57:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.858 18:57:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.858 18:57:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:56.858 18:57:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:56.858 18:57:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.858 18:57:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.858 18:57:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:56.858 18:57:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:56.858 18:57:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.858 18:57:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.858 18:57:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.858 1+0 records in 00:06:56.858 1+0 records out 00:06:56.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194639 s, 21.0 MB/s 00:06:56.858 18:57:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.858 18:57:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:56.858 18:57:49 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.858 18:57:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.858 18:57:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:56.858 18:57:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.858 18:57:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.858 18:57:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:57.116 /dev/nbd1 00:06:57.116 18:57:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:57.116 18:57:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:57.116 18:57:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:57.116 18:57:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:57.116 18:57:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.116 18:57:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.116 18:57:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:57.116 18:57:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:57.116 18:57:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.116 18:57:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.116 18:57:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.116 1+0 records in 00:06:57.116 1+0 records out 00:06:57.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0166735 s, 246 kB/s 00:06:57.116 18:57:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:57.116 18:57:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:57.116 18:57:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:57.116 18:57:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.116 18:57:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:57.116 18:57:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.116 18:57:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.116 18:57:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.116 18:57:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.116 18:57:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:57.374 { 00:06:57.374 "nbd_device": "/dev/nbd0", 00:06:57.374 "bdev_name": "Malloc0" 00:06:57.374 }, 00:06:57.374 { 00:06:57.374 "nbd_device": "/dev/nbd1", 00:06:57.374 "bdev_name": "Malloc1" 00:06:57.374 } 00:06:57.374 ]' 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:57.374 { 00:06:57.374 "nbd_device": "/dev/nbd0", 00:06:57.374 "bdev_name": "Malloc0" 00:06:57.374 }, 00:06:57.374 { 00:06:57.374 "nbd_device": "/dev/nbd1", 00:06:57.374 "bdev_name": "Malloc1" 00:06:57.374 } 00:06:57.374 ]' 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:57.374 /dev/nbd1' 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:57.374 /dev/nbd1' 00:06:57.374 
18:57:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:57.374 256+0 records in 00:06:57.374 256+0 records out 00:06:57.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504723 s, 208 MB/s 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:57.374 256+0 records in 00:06:57.374 256+0 records out 00:06:57.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224432 s, 46.7 MB/s 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.374 18:57:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:57.632 256+0 records in 00:06:57.632 256+0 records out 00:06:57.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243017 s, 43.1 MB/s 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.632 18:57:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.890 18:57:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.890 18:57:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.890 18:57:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.890 18:57:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.890 18:57:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.890 18:57:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.890 18:57:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.890 18:57:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.890 18:57:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.890 18:57:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:58.148 18:57:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:58.148 18:57:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:58.148 18:57:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:58.148 18:57:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.148 18:57:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.148 18:57:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:58.148 18:57:50 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:58.148 18:57:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.148 18:57:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.148 18:57:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.148 18:57:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.405 18:57:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.405 18:57:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.405 18:57:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.405 18:57:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.406 18:57:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.406 18:57:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.406 18:57:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:58.406 18:57:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.406 18:57:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.406 18:57:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:58.406 18:57:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:58.406 18:57:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:58.406 18:57:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:58.664 18:57:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:58.922 [2024-07-25 18:57:51.242531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.922 [2024-07-25 18:57:51.360653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.922 [2024-07-25 18:57:51.360654] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.180 [2024-07-25 18:57:51.425755] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:59.180 [2024-07-25 18:57:51.425826] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:01.736 18:57:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 778322 /var/tmp/spdk-nbd.sock 00:07:01.736 18:57:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 778322 ']' 00:07:01.736 18:57:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.736 18:57:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.736 18:57:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:01.736 18:57:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.736 18:57:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.994 18:57:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.994 18:57:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:01.994 18:57:54 event.app_repeat -- event/event.sh@39 -- # killprocess 778322 00:07:01.994 18:57:54 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 778322 ']' 00:07:01.994 18:57:54 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 778322 00:07:01.994 18:57:54 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:01.994 18:57:54 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.994 18:57:54 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 778322 00:07:01.994 18:57:54 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.994 18:57:54 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.994 18:57:54 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 778322' 00:07:01.994 killing process with pid 778322 00:07:01.994 18:57:54 event.app_repeat -- common/autotest_common.sh@969 -- # kill 778322 00:07:01.994 18:57:54 event.app_repeat -- common/autotest_common.sh@974 -- # wait 778322 00:07:02.252 spdk_app_start is called in Round 0. 00:07:02.252 Shutdown signal received, stop current app iteration 00:07:02.252 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:02.252 spdk_app_start is called in Round 1. 00:07:02.252 Shutdown signal received, stop current app iteration 00:07:02.252 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:02.252 spdk_app_start is called in Round 2. 
00:07:02.252 Shutdown signal received, stop current app iteration 00:07:02.252 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:02.252 spdk_app_start is called in Round 3. 00:07:02.252 Shutdown signal received, stop current app iteration 00:07:02.252 18:57:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:02.252 18:57:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:02.252 00:07:02.252 real 0m18.061s 00:07:02.252 user 0m38.979s 00:07:02.252 sys 0m3.233s 00:07:02.252 18:57:54 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.252 18:57:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:02.252 ************************************ 00:07:02.252 END TEST app_repeat 00:07:02.252 ************************************ 00:07:02.252 18:57:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:02.252 18:57:54 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:02.252 18:57:54 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.252 18:57:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.252 18:57:54 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.252 ************************************ 00:07:02.252 START TEST cpu_locks 00:07:02.252 ************************************ 00:07:02.252 18:57:54 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:02.252 * Looking for test storage... 
00:07:02.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:02.252 18:57:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:02.252 18:57:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:02.252 18:57:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:02.252 18:57:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:02.252 18:57:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.252 18:57:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.252 18:57:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.252 ************************************ 00:07:02.252 START TEST default_locks 00:07:02.253 ************************************ 00:07:02.253 18:57:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:02.253 18:57:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=780673 00:07:02.253 18:57:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.253 18:57:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 780673 00:07:02.253 18:57:54 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 780673 ']' 00:07:02.253 18:57:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.253 18:57:54 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.253 18:57:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:02.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.253 18:57:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.253 18:57:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.253 [2024-07-25 18:57:54.672523] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:02.253 [2024-07-25 18:57:54.672602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780673 ] 00:07:02.253 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.512 [2024-07-25 18:57:54.738493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.512 [2024-07-25 18:57:54.847112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.445 18:57:55 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.445 18:57:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:03.445 18:57:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 780673 00:07:03.445 18:57:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 780673 00:07:03.445 18:57:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:03.703 lslocks: write error 00:07:03.703 18:57:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 780673 00:07:03.703 18:57:55 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 780673 ']' 00:07:03.703 18:57:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 780673 00:07:03.703 18:57:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:03.703 18:57:55 event.cpu_locks.default_locks -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.703 18:57:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 780673 00:07:03.703 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.703 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.703 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 780673' 00:07:03.703 killing process with pid 780673 00:07:03.703 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 780673 00:07:03.703 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 780673 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 780673 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 780673 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 780673 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 780673 ']' 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (780673) - No such process 00:07:04.269 ERROR: process (pid: 780673) is no longer running 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:04.269 00:07:04.269 real 0m1.858s 00:07:04.269 user 0m1.969s 00:07:04.269 sys 0m0.599s 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.269 18:57:56 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.269 ************************************ 00:07:04.269 END TEST default_locks 00:07:04.269 ************************************ 00:07:04.269 18:57:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:04.269 18:57:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.269 18:57:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.269 18:57:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.269 ************************************ 00:07:04.269 START TEST default_locks_via_rpc 00:07:04.269 ************************************ 00:07:04.269 18:57:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:04.269 18:57:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=780933 00:07:04.269 18:57:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.269 18:57:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 780933 00:07:04.269 18:57:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 780933 ']' 00:07:04.269 18:57:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.269 18:57:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.269 18:57:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:04.269 18:57:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.269 18:57:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.269 [2024-07-25 18:57:56.577551] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:04.269 [2024-07-25 18:57:56.577640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780933 ] 00:07:04.269 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.269 [2024-07-25 18:57:56.645021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.528 [2024-07-25 18:57:56.755190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 780933
00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 780933
00:07:04.786 18:57:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:05.043 18:57:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 780933
00:07:05.043 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 780933 ']'
00:07:05.043 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 780933
00:07:05.043 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:07:05.043 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:05.043 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 780933
00:07:05.043 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:05.043 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:05.043 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 780933'
killing process with pid 780933
00:07:05.043 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 780933
00:07:05.043 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 780933
00:07:05.607
00:07:05.607 real 0m1.264s
00:07:05.607 user 0m1.200s
00:07:05.607 sys 0m0.545s
00:07:05.607 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:05.607 18:57:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:05.607 ************************************
00:07:05.607 END TEST default_locks_via_rpc
00:07:05.607 ************************************
00:07:05.607 18:57:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:07:05.607 18:57:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:05.607 18:57:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:05.607 18:57:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:05.607 ************************************
00:07:05.607 START TEST non_locking_app_on_locked_coremask
00:07:05.607 ************************************
00:07:05.607 18:57:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:07:05.607 18:57:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=781130
00:07:05.607 18:57:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:05.607 18:57:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 781130 /var/tmp/spdk.sock
00:07:05.607 18:57:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 781130 ']'
00:07:05.607 18:57:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:05.607 18:57:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:05.607 18:57:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:05.607 18:57:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:05.607 18:57:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:05.607 [2024-07-25 18:57:57.891659] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:05.607 [2024-07-25 18:57:57.891756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781130 ]
00:07:05.607 EAL: No free 2048 kB hugepages reported on node 1
00:07:05.607 [2024-07-25 18:57:57.962723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:05.863 [2024-07-25 18:57:58.080553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.427 18:57:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:06.427 18:57:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:06.427 18:57:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=781267
00:07:06.427 18:57:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:07:06.427 18:57:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 781267 /var/tmp/spdk2.sock
00:07:06.427 18:57:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 781267 ']'
00:07:06.427 18:57:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:06.427 18:57:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:06.427 18:57:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:06.427 18:57:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:06.427 18:57:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:06.427 [2024-07-25 18:57:58.881182] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:06.427 [2024-07-25 18:57:58.881261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781267 ]
00:07:06.685 EAL: No free 2048 kB hugepages reported on node 1
00:07:06.685 [2024-07-25 18:57:58.978029] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:06.685 [2024-07-25 18:57:58.978069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:06.943 [2024-07-25 18:57:59.216510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:07.508 18:57:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:07.508 18:57:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:07.508 18:57:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 781130
00:07:07.508 18:57:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 781130
00:07:07.508 18:57:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:08.073 lslocks: write error
00:07:08.073 18:58:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 781130
00:07:08.073 18:58:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 781130 ']'
00:07:08.073 18:58:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 781130
00:07:08.073 18:58:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:08.073 18:58:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:08.073 18:58:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 781130
00:07:08.073 18:58:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:08.073 18:58:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:08.073 18:58:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 781130'
killing process with pid 781130
00:07:08.073 18:58:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 781130
00:07:08.073 18:58:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 781130
00:07:09.005 18:58:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 781267
00:07:09.005 18:58:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 781267 ']'
00:07:09.005 18:58:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 781267
00:07:09.005 18:58:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:09.005 18:58:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:09.005 18:58:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 781267
00:07:09.005 18:58:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:09.005 18:58:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:09.006 18:58:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 781267'
killing process with pid 781267
00:07:09.006 18:58:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 781267
00:07:09.006 18:58:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 781267
00:07:09.572
00:07:09.572 real 0m3.899s
00:07:09.572 user 0m4.221s
00:07:09.572 sys 0m1.110s
00:07:09.572 18:58:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:09.572 18:58:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:09.572 ************************************
00:07:09.572 END TEST non_locking_app_on_locked_coremask
00:07:09.572 ************************************
00:07:09.572 18:58:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:09.572 18:58:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:09.572 18:58:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:09.572 18:58:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:09.572 ************************************
00:07:09.572 START TEST locking_app_on_unlocked_coremask
00:07:09.572 ************************************
00:07:09.572 18:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:07:09.572 18:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=781574
00:07:09.572 18:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:09.572 18:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 781574 /var/tmp/spdk.sock
00:07:09.572 18:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 781574 ']'
00:07:09.572 18:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:09.572 18:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:09.572 18:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:09.572 18:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:09.572 18:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:09.572 [2024-07-25 18:58:01.835226] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:09.572 [2024-07-25 18:58:01.835320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781574 ]
00:07:09.572 EAL: No free 2048 kB hugepages reported on node 1
00:07:09.572 [2024-07-25 18:58:01.900940] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:09.572 [2024-07-25 18:58:01.900977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:09.572 [2024-07-25 18:58:02.011126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:09.830 18:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:09.830 18:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:09.830 18:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=781701
00:07:09.830 18:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:09.830 18:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 781701 /var/tmp/spdk2.sock
00:07:09.830 18:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 781701 ']'
00:07:09.830 18:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:09.830 18:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:09.830 18:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:09.830 18:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:09.830 18:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:10.088 [2024-07-25 18:58:02.318922] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:10.088 [2024-07-25 18:58:02.319006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781701 ]
00:07:10.088 EAL: No free 2048 kB hugepages reported on node 1
00:07:10.088 [2024-07-25 18:58:02.427120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:10.345 [2024-07-25 18:58:02.666390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:10.910 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:10.910 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:10.910 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 781701
00:07:10.910 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 781701
00:07:10.910 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:11.474 lslocks: write error
00:07:11.474 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 781574
00:07:11.474 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 781574 ']'
00:07:11.474 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 781574
00:07:11.474 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:11.474 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:11.474 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 781574
00:07:11.474 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:11.474 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:11.474 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 781574'
killing process with pid 781574
00:07:11.474 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 781574
00:07:11.474 18:58:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 781574
00:07:12.405 18:58:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 781701
00:07:12.405 18:58:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 781701 ']'
00:07:12.405 18:58:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 781701
00:07:12.405 18:58:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:12.405 18:58:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:12.405 18:58:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 781701
00:07:12.405 18:58:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:12.405 18:58:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:12.405 18:58:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 781701'
killing process with pid 781701
00:07:12.405 18:58:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 781701
00:07:12.405 18:58:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 781701
00:07:12.996
00:07:12.996 real 0m3.425s
00:07:12.996 user 0m3.534s
00:07:12.996 sys 0m1.051s
00:07:12.996 18:58:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:12.996 18:58:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:12.996 ************************************
00:07:12.996 END TEST locking_app_on_unlocked_coremask
00:07:12.996 ************************************
00:07:12.996 18:58:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:07:12.996 18:58:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:12.996 18:58:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:12.996 18:58:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:12.996 ************************************
00:07:12.996 START TEST locking_app_on_locked_coremask
00:07:12.996 ************************************
00:07:12.996 18:58:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:07:12.996 18:58:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=782008
00:07:12.996 18:58:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:12.996 18:58:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 782008 /var/tmp/spdk.sock
00:07:12.996 18:58:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 782008 ']'
00:07:12.996 18:58:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:12.996 18:58:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:12.996 18:58:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:12.996 18:58:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:12.996 18:58:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:12.996 [2024-07-25 18:58:05.314754] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:12.996 [2024-07-25 18:58:05.314834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782008 ]
00:07:12.996 EAL: No free 2048 kB hugepages reported on node 1
00:07:12.996 [2024-07-25 18:58:05.387020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.255 [2024-07-25 18:58:05.508993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=782146
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 782146 /var/tmp/spdk2.sock
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 782146 /var/tmp/spdk2.sock
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 782146 /var/tmp/spdk2.sock
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 782146 ']'
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:13.820 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:14.078 [2024-07-25 18:58:06.311199] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:14.078 [2024-07-25 18:58:06.311284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782146 ]
00:07:14.078 EAL: No free 2048 kB hugepages reported on node 1
00:07:14.078 [2024-07-25 18:58:06.421084] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 782008 has claimed it.
00:07:14.078 [2024-07-25 18:58:06.421152] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:14.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (782146) - No such process
00:07:14.642 ERROR: process (pid: 782146) is no longer running
00:07:14.643 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:14.643 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:07:14.643 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:07:14.643 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:14.643 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:14.643 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:14.643 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 782008
00:07:14.643 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 782008
00:07:14.643 18:58:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:14.900 lslocks: write error
00:07:14.900 18:58:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 782008
00:07:14.900 18:58:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 782008 ']'
00:07:14.900 18:58:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 782008
00:07:14.900 18:58:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:14.900 18:58:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:14.900 18:58:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 782008
00:07:14.900 18:58:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:14.900 18:58:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:14.900 18:58:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 782008'
killing process with pid 782008
00:07:14.900 18:58:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 782008
00:07:14.900 18:58:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 782008
00:07:15.464
00:07:15.464 real 0m2.549s
00:07:15.464 user 0m2.881s
00:07:15.464 sys 0m0.693s
00:07:15.464 18:58:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:15.464 18:58:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:15.464 ************************************
00:07:15.464 END TEST locking_app_on_locked_coremask
00:07:15.464 ************************************
00:07:15.464 18:58:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:07:15.464 18:58:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:15.464 18:58:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:15.464 18:58:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:15.464 ************************************
00:07:15.464 START TEST locking_overlapped_coremask
00:07:15.464 ************************************
00:07:15.464 18:58:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:07:15.464 18:58:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=782440
00:07:15.464 18:58:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:07:15.464 18:58:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 782440 /var/tmp/spdk.sock
00:07:15.464 18:58:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 782440 ']'
00:07:15.464 18:58:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:15.464 18:58:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:15.464 18:58:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:15.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
18:58:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:15.464 18:58:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:15.464 [2024-07-25 18:58:07.913716] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:15.464 [2024-07-25 18:58:07.913805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782440 ]
00:07:15.722 EAL: No free 2048 kB hugepages reported on node 1
00:07:15.722 [2024-07-25 18:58:07.988956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:15.722 [2024-07-25 18:58:08.116128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:15.722 [2024-07-25 18:58:08.116177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:15.722 [2024-07-25 18:58:08.116181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=782446
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 782446 /var/tmp/spdk2.sock
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 782446 /var/tmp/spdk2.sock
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 782446 /var/tmp/spdk2.sock
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 782446 ']'
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:15.981 18:58:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:16.238 [2024-07-25 18:58:08.430690] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:16.238 [2024-07-25 18:58:08.430775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782446 ]
00:07:16.238 EAL: No free 2048 kB hugepages reported on node 1
00:07:16.238 [2024-07-25 18:58:08.532222] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 782440 has claimed it.
00:07:16.238 [2024-07-25 18:58:08.532282] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:16.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (782446) - No such process
00:07:16.805 ERROR: process (pid: 782446) is no longer running
00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask
-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 782440 00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 782440 ']' 00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 782440 00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 782440 00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 782440' 00:07:16.805 killing process with pid 782440 00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 782440 00:07:16.805 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 782440 00:07:17.370 00:07:17.370 real 0m1.746s 00:07:17.370 user 0m4.560s 00:07:17.370 sys 0m0.484s 00:07:17.370 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.370 18:58:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- 
# set +x 00:07:17.370 ************************************ 00:07:17.370 END TEST locking_overlapped_coremask 00:07:17.370 ************************************ 00:07:17.370 18:58:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:17.370 18:58:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.370 18:58:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.370 18:58:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.371 ************************************ 00:07:17.371 START TEST locking_overlapped_coremask_via_rpc 00:07:17.371 ************************************ 00:07:17.371 18:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:17.371 18:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=782608 00:07:17.371 18:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:17.371 18:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 782608 /var/tmp/spdk.sock 00:07:17.371 18:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 782608 ']' 00:07:17.371 18:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.371 18:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.371 18:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:17.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.371 18:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.371 18:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.371 [2024-07-25 18:58:09.718540] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:17.371 [2024-07-25 18:58:09.718628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782608 ] 00:07:17.371 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.371 [2024-07-25 18:58:09.791463] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:17.371 [2024-07-25 18:58:09.791498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.629 [2024-07-25 18:58:09.912463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.629 [2024-07-25 18:58:09.912529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.629 [2024-07-25 18:58:09.912533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.196 18:58:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.196 18:58:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:18.196 18:58:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=782746 00:07:18.196 18:58:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:18.196 18:58:10 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 782746 /var/tmp/spdk2.sock 00:07:18.197 18:58:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 782746 ']' 00:07:18.197 18:58:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.197 18:58:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.197 18:58:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.197 18:58:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.197 18:58:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.455 [2024-07-25 18:58:10.698695] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:18.455 [2024-07-25 18:58:10.698776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782746 ] 00:07:18.455 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.455 [2024-07-25 18:58:10.800588] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:18.455 [2024-07-25 18:58:10.800624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.714 [2024-07-25 18:58:11.024650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.714 [2024-07-25 18:58:11.024711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:18.714 [2024-07-25 18:58:11.024714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.280 18:58:11 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.280 [2024-07-25 18:58:11.645223] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 782608 has claimed it. 00:07:19.280 request: 00:07:19.280 { 00:07:19.280 "method": "framework_enable_cpumask_locks", 00:07:19.280 "req_id": 1 00:07:19.280 } 00:07:19.280 Got JSON-RPC error response 00:07:19.280 response: 00:07:19.280 { 00:07:19.280 "code": -32603, 00:07:19.280 "message": "Failed to claim CPU core: 2" 00:07:19.280 } 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 782608 /var/tmp/spdk.sock 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- 
# '[' -z 782608 ']' 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.280 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.538 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.538 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:19.538 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 782746 /var/tmp/spdk2.sock 00:07:19.538 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 782746 ']' 00:07:19.538 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.538 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.538 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:19.538 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.538 18:58:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.796 18:58:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.796 18:58:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:19.796 18:58:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:19.796 18:58:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:19.796 18:58:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:19.796 18:58:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:19.796 00:07:19.796 real 0m2.483s 00:07:19.796 user 0m1.176s 00:07:19.796 sys 0m0.240s 00:07:19.796 18:58:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.796 18:58:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.796 ************************************ 00:07:19.796 END TEST locking_overlapped_coremask_via_rpc 00:07:19.796 ************************************ 00:07:19.796 18:58:12 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:19.796 18:58:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 782608 ]] 00:07:19.796 18:58:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 782608 00:07:19.796 18:58:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 782608 ']' 00:07:19.796 18:58:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 782608 00:07:19.796 18:58:12 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:19.796 18:58:12 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.796 18:58:12 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 782608 00:07:19.796 18:58:12 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.796 18:58:12 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.796 18:58:12 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 782608' 00:07:19.796 killing process with pid 782608 00:07:19.796 18:58:12 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 782608 00:07:19.796 18:58:12 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 782608 00:07:20.362 18:58:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 782746 ]] 00:07:20.362 18:58:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 782746 00:07:20.362 18:58:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 782746 ']' 00:07:20.362 18:58:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 782746 00:07:20.362 18:58:12 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:20.362 18:58:12 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.362 18:58:12 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 782746 00:07:20.362 18:58:12 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:20.362 18:58:12 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:20.362 18:58:12 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 782746' 00:07:20.362 
killing process with pid 782746 00:07:20.362 18:58:12 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 782746 00:07:20.362 18:58:12 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 782746 00:07:20.927 18:58:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.927 18:58:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:20.927 18:58:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 782608 ]] 00:07:20.927 18:58:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 782608 00:07:20.927 18:58:13 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 782608 ']' 00:07:20.927 18:58:13 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 782608 00:07:20.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (782608) - No such process 00:07:20.927 18:58:13 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 782608 is not found' 00:07:20.927 Process with pid 782608 is not found 00:07:20.927 18:58:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 782746 ]] 00:07:20.927 18:58:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 782746 00:07:20.927 18:58:13 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 782746 ']' 00:07:20.927 18:58:13 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 782746 00:07:20.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (782746) - No such process 00:07:20.927 18:58:13 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 782746 is not found' 00:07:20.927 Process with pid 782746 is not found 00:07:20.927 18:58:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.927 00:07:20.927 real 0m18.591s 00:07:20.927 user 0m31.878s 00:07:20.927 sys 0m5.628s 00:07:20.927 18:58:13 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.927 18:58:13 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:20.927 ************************************ 00:07:20.927 END TEST cpu_locks 00:07:20.927 ************************************ 00:07:20.927 00:07:20.927 real 0m43.378s 00:07:20.927 user 1m20.386s 00:07:20.927 sys 0m9.729s 00:07:20.927 18:58:13 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.927 18:58:13 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.927 ************************************ 00:07:20.927 END TEST event 00:07:20.927 ************************************ 00:07:20.927 18:58:13 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:20.927 18:58:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.927 18:58:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.927 18:58:13 -- common/autotest_common.sh@10 -- # set +x 00:07:20.927 ************************************ 00:07:20.927 START TEST thread 00:07:20.927 ************************************ 00:07:20.927 18:58:13 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:20.927 * Looking for test storage... 
00:07:20.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:20.927 18:58:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:20.927 18:58:13 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:20.927 18:58:13 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.927 18:58:13 thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.927 ************************************ 00:07:20.927 START TEST thread_poller_perf 00:07:20.927 ************************************ 00:07:20.927 18:58:13 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:20.927 [2024-07-25 18:58:13.283526] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:20.927 [2024-07-25 18:58:13.283596] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783130 ] 00:07:20.927 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.927 [2024-07-25 18:58:13.350132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.191 [2024-07-25 18:58:13.460818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.191 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:22.125 ====================================== 00:07:22.125 busy:2714542230 (cyc) 00:07:22.125 total_run_count: 300000 00:07:22.125 tsc_hz: 2700000000 (cyc) 00:07:22.125 ====================================== 00:07:22.125 poller_cost: 9048 (cyc), 3351 (nsec) 00:07:22.125 00:07:22.125 real 0m1.320s 00:07:22.125 user 0m1.228s 00:07:22.125 sys 0m0.085s 00:07:22.125 18:58:14 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.125 18:58:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:22.125 ************************************ 00:07:22.125 END TEST thread_poller_perf 00:07:22.125 ************************************ 00:07:22.383 18:58:14 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:22.383 18:58:14 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:22.383 18:58:14 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.383 18:58:14 thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.384 ************************************ 00:07:22.384 START TEST thread_poller_perf 00:07:22.384 ************************************ 00:07:22.384 18:58:14 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:22.384 [2024-07-25 18:58:14.657596] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:22.384 [2024-07-25 18:58:14.657665] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783389 ] 00:07:22.384 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.384 [2024-07-25 18:58:14.724345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.384 [2024-07-25 18:58:14.840400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.384 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:23.757 ====================================== 00:07:23.757 busy:2702880985 (cyc) 00:07:23.757 total_run_count: 3857000 00:07:23.757 tsc_hz: 2700000000 (cyc) 00:07:23.757 ====================================== 00:07:23.757 poller_cost: 700 (cyc), 259 (nsec) 00:07:23.757 00:07:23.757 real 0m1.318s 00:07:23.757 user 0m1.218s 00:07:23.757 sys 0m0.093s 00:07:23.757 18:58:15 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.757 18:58:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:23.757 ************************************ 00:07:23.757 END TEST thread_poller_perf 00:07:23.757 ************************************ 00:07:23.757 18:58:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:23.757 00:07:23.757 real 0m2.781s 00:07:23.757 user 0m2.498s 00:07:23.757 sys 0m0.282s 00:07:23.757 18:58:15 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.757 18:58:15 thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.757 ************************************ 00:07:23.757 END TEST thread 00:07:23.757 ************************************ 00:07:23.757 18:58:16 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:23.757 18:58:16 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 
00:07:23.757 18:58:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.757 18:58:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.757 18:58:16 -- common/autotest_common.sh@10 -- # set +x 00:07:23.757 ************************************ 00:07:23.757 START TEST app_cmdline 00:07:23.757 ************************************ 00:07:23.757 18:58:16 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:23.757 * Looking for test storage... 00:07:23.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:23.757 18:58:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:23.757 18:58:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=783591 00:07:23.757 18:58:16 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:23.757 18:58:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 783591 00:07:23.757 18:58:16 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 783591 ']' 00:07:23.757 18:58:16 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.757 18:58:16 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.757 18:58:16 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.757 18:58:16 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.757 18:58:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:23.757 [2024-07-25 18:58:16.134654] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:23.757 [2024-07-25 18:58:16.134737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783591 ] 00:07:23.757 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.757 [2024-07-25 18:58:16.200801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.016 [2024-07-25 18:58:16.307910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.274 18:58:16 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.274 18:58:16 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:24.274 18:58:16 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:24.532 { 00:07:24.532 "version": "SPDK v24.09-pre git sha1 704257090", 00:07:24.532 "fields": { 00:07:24.532 "major": 24, 00:07:24.532 "minor": 9, 00:07:24.532 "patch": 0, 00:07:24.532 "suffix": "-pre", 00:07:24.532 "commit": "704257090" 00:07:24.532 } 00:07:24.532 } 00:07:24.532 18:58:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:24.532 18:58:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:24.532 18:58:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:24.532 18:58:16 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:24.532 18:58:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:24.532 18:58:16 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.532 18:58:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:24.532 18:58:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:24.532 18:58:16 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:24.532 18:58:16 app_cmdline -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.532 18:58:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:24.532 18:58:16 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:24.532 18:58:16 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.532 18:58:16 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:24.532 18:58:16 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.532 18:58:16 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.532 18:58:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.532 18:58:16 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.532 18:58:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.532 18:58:16 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.532 18:58:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.532 18:58:16 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.532 18:58:16 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:24.532 18:58:16 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.790 request: 00:07:24.790 { 00:07:24.790 "method": "env_dpdk_get_mem_stats", 00:07:24.790 "req_id": 1 
00:07:24.790 } 00:07:24.790 Got JSON-RPC error response 00:07:24.790 response: 00:07:24.790 { 00:07:24.790 "code": -32601, 00:07:24.790 "message": "Method not found" 00:07:24.790 } 00:07:24.790 18:58:17 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:24.790 18:58:17 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.790 18:58:17 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.790 18:58:17 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.790 18:58:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 783591 00:07:24.790 18:58:17 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 783591 ']' 00:07:24.790 18:58:17 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 783591 00:07:24.790 18:58:17 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:24.790 18:58:17 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.790 18:58:17 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 783591 00:07:24.790 18:58:17 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.790 18:58:17 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.790 18:58:17 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 783591' 00:07:24.790 killing process with pid 783591 00:07:24.790 18:58:17 app_cmdline -- common/autotest_common.sh@969 -- # kill 783591 00:07:24.790 18:58:17 app_cmdline -- common/autotest_common.sh@974 -- # wait 783591 00:07:25.357 00:07:25.357 real 0m1.617s 00:07:25.357 user 0m1.927s 00:07:25.357 sys 0m0.488s 00:07:25.357 18:58:17 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.357 18:58:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:25.357 ************************************ 00:07:25.357 END TEST app_cmdline 00:07:25.357 ************************************ 00:07:25.357 18:58:17 -- 
spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:25.357 18:58:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.357 18:58:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.357 18:58:17 -- common/autotest_common.sh@10 -- # set +x 00:07:25.357 ************************************ 00:07:25.357 START TEST version 00:07:25.357 ************************************ 00:07:25.357 18:58:17 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:25.357 * Looking for test storage... 00:07:25.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:25.357 18:58:17 version -- app/version.sh@17 -- # get_header_version major 00:07:25.357 18:58:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.357 18:58:17 version -- app/version.sh@14 -- # cut -f2 00:07:25.357 18:58:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.357 18:58:17 version -- app/version.sh@17 -- # major=24 00:07:25.357 18:58:17 version -- app/version.sh@18 -- # get_header_version minor 00:07:25.357 18:58:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.357 18:58:17 version -- app/version.sh@14 -- # cut -f2 00:07:25.357 18:58:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.357 18:58:17 version -- app/version.sh@18 -- # minor=9 00:07:25.357 18:58:17 version -- app/version.sh@19 -- # get_header_version patch 00:07:25.357 18:58:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.357 18:58:17 version -- app/version.sh@14 -- # cut -f2 00:07:25.357 18:58:17 
version -- app/version.sh@14 -- # tr -d '"' 00:07:25.357 18:58:17 version -- app/version.sh@19 -- # patch=0 00:07:25.357 18:58:17 version -- app/version.sh@20 -- # get_header_version suffix 00:07:25.357 18:58:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.357 18:58:17 version -- app/version.sh@14 -- # cut -f2 00:07:25.357 18:58:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.357 18:58:17 version -- app/version.sh@20 -- # suffix=-pre 00:07:25.357 18:58:17 version -- app/version.sh@22 -- # version=24.9 00:07:25.357 18:58:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:25.357 18:58:17 version -- app/version.sh@28 -- # version=24.9rc0 00:07:25.357 18:58:17 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:25.357 18:58:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:25.357 18:58:17 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:25.357 18:58:17 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:25.357 00:07:25.357 real 0m0.106s 00:07:25.357 user 0m0.062s 00:07:25.357 sys 0m0.064s 00:07:25.357 18:58:17 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.357 18:58:17 version -- common/autotest_common.sh@10 -- # set +x 00:07:25.357 ************************************ 00:07:25.357 END TEST version 00:07:25.357 ************************************ 00:07:25.616 18:58:17 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:25.616 18:58:17 -- spdk/autotest.sh@202 -- # uname -s 00:07:25.616 18:58:17 -- spdk/autotest.sh@202 -- # [[ Linux == 
Linux ]] 00:07:25.616 18:58:17 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:25.616 18:58:17 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:25.616 18:58:17 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:07:25.616 18:58:17 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:25.616 18:58:17 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:25.616 18:58:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.616 18:58:17 -- common/autotest_common.sh@10 -- # set +x 00:07:25.616 18:58:17 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:25.616 18:58:17 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:25.616 18:58:17 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:25.616 18:58:17 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:25.616 18:58:17 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:25.616 18:58:17 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:25.616 18:58:17 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:25.616 18:58:17 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:25.616 18:58:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.616 18:58:17 -- common/autotest_common.sh@10 -- # set +x 00:07:25.616 ************************************ 00:07:25.616 START TEST nvmf_tcp 00:07:25.616 ************************************ 00:07:25.616 18:58:17 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:25.616 * Looking for test storage... 00:07:25.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:25.616 18:58:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:25.616 18:58:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:25.616 18:58:17 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:25.616 18:58:17 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:25.616 18:58:17 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.616 18:58:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:25.616 ************************************ 00:07:25.616 START TEST nvmf_target_core 00:07:25.616 ************************************ 00:07:25.616 18:58:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:25.616 * Looking for test storage... 00:07:25.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:25.616 18:58:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:25.616 18:58:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:25.616 18:58:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.616 18:58:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:25.616 18:58:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.616 18:58:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.616 18:58:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.616 18:58:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.616 18:58:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.616 18:58:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.616 18:58:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.616 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.616 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.616 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.616 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:25.616 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:25.616 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.616 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.616 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.616 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.616 18:58:18 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.616 18:58:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.616 18:58:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.616 18:58:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.617 18:58:18 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:25.617 ************************************ 00:07:25.617 START TEST nvmf_abort 00:07:25.617 ************************************ 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:25.617 * Looking for test storage... 
00:07:25.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.617 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:25.876 18:58:18 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:25.876 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:28.408 18:58:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:28.408 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:28.408 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:28.408 18:58:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.408 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:28.409 Found net devices under 0000:09:00.0: cvl_0_0 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.409 18:58:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:28.409 Found net devices under 0000:09:00.1: cvl_0_1 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.409 18:58:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.409 18:58:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:28.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:07:28.409 00:07:28.409 --- 10.0.0.2 ping statistics --- 00:07:28.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.409 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:28.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:07:28.409 00:07:28.409 --- 10.0.0.1 ping statistics --- 00:07:28.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.409 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:28.409 18:58:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=785930 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 785930 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 785930 ']' 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.409 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.409 [2024-07-25 18:58:20.657058] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:28.409 [2024-07-25 18:58:20.657166] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.409 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.409 [2024-07-25 18:58:20.736553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.409 [2024-07-25 18:58:20.856887] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.409 [2024-07-25 18:58:20.856934] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.409 [2024-07-25 18:58:20.856964] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.409 [2024-07-25 18:58:20.856977] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.409 [2024-07-25 18:58:20.856989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:28.409 [2024-07-25 18:58:20.857074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.409 [2024-07-25 18:58:20.857198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.409 [2024-07-25 18:58:20.857203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.342 [2024-07-25 18:58:21.617387] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.342 Malloc0 00:07:29.342 18:58:21 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.342 Delay0 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.342 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.343 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.343 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:29.343 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.343 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.343 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.343 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:29.343 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.343 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.343 [2024-07-25 18:58:21.686773] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.343 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.343 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:29.343 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.343 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.343 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.343 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:29.343 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.343 [2024-07-25 18:58:21.782308] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:31.870 Initializing NVMe Controllers 00:07:31.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:31.870 controller IO queue size 128 less than required 00:07:31.870 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:31.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:31.870 Initialization complete. Launching workers. 
00:07:31.870 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28712 00:07:31.870 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28773, failed to submit 62 00:07:31.870 success 28716, unsuccess 57, failed 0 00:07:31.870 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:31.870 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.870 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.870 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.870 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:31.870 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:31.870 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:31.870 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:31.870 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:31.870 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:31.870 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.871 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:31.871 rmmod nvme_tcp 00:07:31.871 rmmod nvme_fabrics 00:07:31.871 rmmod nvme_keyring 00:07:31.871 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.871 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:31.871 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:31.871 18:58:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 785930 ']' 00:07:31.871 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 785930 00:07:31.871 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 785930 ']' 00:07:31.871 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 785930 00:07:31.871 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:31.871 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.871 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 785930 00:07:31.871 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:31.871 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:31.871 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 785930' 00:07:31.871 killing process with pid 785930 00:07:31.871 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 785930 00:07:31.871 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 785930 00:07:32.153 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:32.153 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:32.153 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:32.153 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:32.153 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:32.153 18:58:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.153 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.153 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:34.072 00:07:34.072 real 0m8.366s 00:07:34.072 user 0m12.810s 00:07:34.072 sys 0m2.825s 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.072 ************************************ 00:07:34.072 END TEST nvmf_abort 00:07:34.072 ************************************ 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:34.072 ************************************ 00:07:34.072 START TEST nvmf_ns_hotplug_stress 00:07:34.072 ************************************ 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:34.072 * Looking for test storage... 
00:07:34.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:34.072 18:58:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:34.072 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:34.073 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:34.073 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.073 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.073 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.073 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:34.073 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:34.073 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:34.073 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:36.605 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.605 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:36.605 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:36.605 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:36.605 18:58:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:07:36.605 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=()
00:07:36.605 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers
00:07:36.605 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=()
00:07:36.605 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs
00:07:36.605 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=()
00:07:36.605 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=()
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=()
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:07:36.606 Found 0000:09:00.0 (0x8086 - 0x159b)
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:07:36.606 Found 0000:09:00.1 (0x8086 - 0x159b)
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:07:36.606 Found net devices under 0000:09:00.0: cvl_0_0
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:07:36.606 Found net devices under 0000:09:00.1: cvl_0_1
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:36.606 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:36.607 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:36.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:36.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms
00:07:36.607
00:07:36.607 --- 10.0.0.2 ping statistics ---
00:07:36.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:36.607 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:36.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:36.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms
00:07:36.607
00:07:36.607 --- 10.0.0.1 ping statistics ---
00:07:36.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:36.607 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:36.607 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:36.866 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=788588
00:07:36.866 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:07:36.866 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 788588
00:07:36.866 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 788588 ']'
00:07:36.866 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:36.866 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:36.866 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:36.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:36.866 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:36.866 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:36.866 [2024-07-25 18:58:29.121129] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:36.866 [2024-07-25 18:58:29.121220] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:36.866 EAL: No free 2048 kB hugepages reported on node 1
00:07:36.866 [2024-07-25 18:58:29.201588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:36.866 [2024-07-25 18:58:29.312023] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:36.866 [2024-07-25 18:58:29.312079] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:36.866 [2024-07-25 18:58:29.312118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:36.866 [2024-07-25 18:58:29.312130] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:36.866 [2024-07-25 18:58:29.312140] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:36.866 [2024-07-25 18:58:29.312190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:36.866 [2024-07-25 18:58:29.312251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:36.866 [2024-07-25 18:58:29.312255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:37.129 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:37.129 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0
00:07:37.129 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:07:37.129 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:37.129 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:37.129 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:37.129 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:07:37.129 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:37.388 [2024-07-25 18:58:29.683286] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:37.388 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:37.645 18:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:37.903 [2024-07-25 18:58:30.177657] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:37.903 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:38.160 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:07:38.418 Malloc0
00:07:38.418 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:38.674 Delay0
00:07:38.674 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:38.932 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:07:39.191 NULL1
00:07:39.191 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:07:39.449 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=788998
00:07:39.449 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:07:39.449 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:39.449 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:39.449 EAL: No free 2048 kB hugepages reported on node 1
00:07:39.707 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:39.965 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:07:39.965 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:07:39.965 true
00:07:39.965 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:39.965 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:40.223 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:40.480 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:07:40.480 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:07:40.739 true
00:07:40.739 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:40.739 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:41.674 Read completed with error (sct=0, sc=11)
00:07:41.674 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:41.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:41.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:41.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:41.933 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:07:41.933 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:07:42.192 true
00:07:42.192 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:42.192 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:42.458 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:42.716 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:07:42.716 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:07:42.974 true
00:07:42.974 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:42.974 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:43.908 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:44.165 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:07:44.166 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:07:44.423 true
00:07:44.423 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:44.423 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:44.680 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:44.937 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:07:44.937 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:07:45.195 true
00:07:45.195 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:45.195 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:46.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:46.127 18:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:46.127 18:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:07:46.127 18:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:07:46.385 true
00:07:46.385 18:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:46.385 18:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:46.642 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:46.899 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:07:46.899 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:07:47.157 true
00:07:47.157 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:47.157 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:48.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:48.090 18:58:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:48.348 18:58:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:07:48.348 18:58:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:07:48.606 true
00:07:48.606 18:58:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:48.606 18:58:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:48.863 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:48.863 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:07:48.863 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:07:49.121 true
00:07:49.379 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:49.379 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:50.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:50.324 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:50.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:50.643 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:07:50.643 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:07:50.643 true
00:07:50.643 18:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:50.643 18:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:50.900 18:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:51.157 18:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:07:51.157 18:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:07:51.414 true
00:07:51.414 18:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:51.415 18:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:52.347 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:52.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:52.605 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:07:52.605 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:07:52.862 true
00:07:52.862 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:52.862 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:53.120 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:53.378 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:07:53.378 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:07:53.635 true
00:07:53.635 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:53.635 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:54.569 18:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:54.826 18:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:07:54.826 18:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:07:55.084 true
00:07:55.084 18:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:55.084 18:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:55.342 18:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:55.342 18:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:07:55.342 18:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:07:55.600 true
00:07:55.600 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:55.600 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:56.534 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:56.791 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:07:56.791 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:07:57.049 true
00:07:57.049 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:57.049 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:57.306 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:57.566 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:07:57.566 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:07:57.824 true
00:07:57.824 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:57.824 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:58.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:58.646 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:58.646 18:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:07:58.646 18:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:07:58.904 true
00:07:58.904 18:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:58.904 18:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:59.161 18:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:59.419 18:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:07:59.419 18:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:07:59.677 true
00:07:59.677 18:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:07:59.677 18:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:00.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:00.867 18:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:00.867 18:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:08:00.867 18:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:08:01.127 true
00:08:01.127 18:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998
00:08:01.127 18:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:01.385 18:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:01.643 18:58:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:01.643 18:58:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:01.901 true 00:08:01.901 18:58:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998 00:08:01.901 18:58:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.833 18:58:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.091 18:58:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:03.091 18:58:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:03.349 true 00:08:03.349 18:58:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998 00:08:03.349 18:58:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.606 18:58:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.865 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:03.865 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:04.123 true 00:08:04.123 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998 00:08:04.123 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.053 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.311 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:05.311 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:05.568 true 00:08:05.568 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998 00:08:05.568 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.825 18:58:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.082 18:58:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:06.082 18:58:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:06.340 true 00:08:06.340 18:58:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998 00:08:06.340 18:58:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.598 18:58:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.855 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:06.855 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:07.131 true 00:08:07.131 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998 00:08:07.131 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.081 18:59:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.081 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:08:08.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.338 18:59:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:08.338 18:59:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:08.595 true 00:08:08.595 18:59:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998 00:08:08.595 18:59:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.528 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.528 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:09.528 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:09.785 Initializing NVMe Controllers 00:08:09.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:09.785 Controller IO queue size 128, less than required. 00:08:09.785 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:09.785 Controller IO queue size 128, less than required. 00:08:09.786 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:09.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:09.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:09.786 Initialization complete. Launching workers. 00:08:09.786 ======================================================== 00:08:09.786 Latency(us) 00:08:09.786 Device Information : IOPS MiB/s Average min max 00:08:09.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 764.38 0.37 87771.93 2456.68 1103080.00 00:08:09.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11126.78 5.43 11470.82 2895.12 451484.03 00:08:09.786 ======================================================== 00:08:09.786 Total : 11891.17 5.81 16375.58 2456.68 1103080.00 00:08:09.786 00:08:09.786 true 00:08:09.786 18:59:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 788998 00:08:09.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (788998) - No such process 00:08:09.786 18:59:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 788998 00:08:09.786 18:59:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.043 18:59:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:10.300 18:59:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:10.300 
18:59:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:10.300 18:59:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:10.300 18:59:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.300 18:59:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:10.557 null0 00:08:10.557 18:59:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.557 18:59:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.557 18:59:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:10.814 null1 00:08:10.814 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.814 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.814 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:11.072 null2 00:08:11.072 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:11.072 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:11.072 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 
00:08:11.329 null3 00:08:11.329 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:11.329 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:11.329 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:11.585 null4 00:08:11.585 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:11.585 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:11.585 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:11.843 null5 00:08:11.843 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:11.843 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:11.843 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:12.101 null6 00:08:12.101 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:12.101 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:12.101 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:12.359 null7 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:12.359 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 792949 792950 792952 792954 792956 792958 792960 792962 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.360 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.618 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.618 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.618 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.618 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:12.618 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:12.618 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:12.618 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:12.618 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:12.876 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:12.877 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:12.877 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:12.877 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:12.877 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:12.877 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:12.877 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:12.877 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:12.877 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:13.134 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:13.134 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:13.134 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:13.134 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:13.134 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:13.134 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:13.134 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:13.134 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.391 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:13.648 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:13.648 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:13.648 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:13.648 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:13.648 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:13.648 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:13.648 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:13.648 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:13.907 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.908 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:14.166 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:14.166 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:14.166 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:14.166 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:14.166 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:14.166 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:14.166 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:14.166 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.423 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:14.681 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:14.939 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:14.939 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:14.939 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:14.939 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:14.939 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:14.939 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:14.939 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:15.196 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.196 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.196 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:15.196 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.196 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.196 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:15.196 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.196 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.196 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:15.196 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.196 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.196 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:15.196 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.196 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.196 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:15.197 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.197 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.197 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.197 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.197 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:15.197 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:15.197 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.197 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.197 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:15.454 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:15.454 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:15.454 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:15.454 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:15.455 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:15.455 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:15.455 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:15.455 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.712 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:15.712 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.712 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.712 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:15.970 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:15.970 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:15.970 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:15.970 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:15.970 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:15.970 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:15.970 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:15.970 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.228 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:16.485 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:16.485 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:16.485 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:16.486 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:16.486 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:16.486 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:16.486 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:16.486 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.743 18:59:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.743 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.744 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.001 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.001 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.001 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.001 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.001 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.001 18:59:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.001 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.001 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.258 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.258 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.258 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.258 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.258 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.258 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.258 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.258 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.258 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.259 
18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.259 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.259 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.259 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.259 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.259 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:17.259 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.259 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.259 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.259 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.259 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.259 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:08:17.259 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.259 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.259 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.516 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.516 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.516 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.516 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.516 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.516 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.516 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.517 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:17.774 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:17.775 rmmod nvme_tcp 00:08:17.775 rmmod nvme_fabrics 00:08:17.775 rmmod nvme_keyring 00:08:18.032 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.032 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:18.032 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:18.032 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 788588 ']' 00:08:18.032 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 788588 00:08:18.032 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' 
-z 788588 ']' 00:08:18.032 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 788588 00:08:18.032 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:18.032 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.032 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 788588 00:08:18.032 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:18.032 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:18.032 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 788588' 00:08:18.032 killing process with pid 788588 00:08:18.032 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 788588 00:08:18.032 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 788588 00:08:18.292 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.292 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.292 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.292 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.292 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.292 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.292 18:59:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.292 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.199 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:20.199 00:08:20.199 real 0m46.169s 00:08:20.199 user 3m28.603s 00:08:20.199 sys 0m16.761s 00:08:20.199 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.199 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.199 ************************************ 00:08:20.199 END TEST nvmf_ns_hotplug_stress 00:08:20.199 ************************************ 00:08:20.199 18:59:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:20.199 18:59:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:20.199 18:59:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.199 18:59:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.199 ************************************ 00:08:20.199 START TEST nvmf_delete_subsystem 00:08:20.199 ************************************ 00:08:20.199 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:20.457 * Looking for test storage... 
00:08:20.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.457 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:20.458 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:23.004 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.004 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.005 18:59:15 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.005 18:59:15 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:23.005 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:23.005 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.005 18:59:15 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:23.005 Found net devices under 0000:09:00.0: cvl_0_0 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:23.005 Found net devices under 0000:09:00.1: cvl_0_1 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 
)) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.005 
18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:23.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:08:23.005 00:08:23.005 --- 10.0.0.2 ping statistics --- 00:08:23.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.005 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:08:23.005 00:08:23.005 --- 10.0.0.1 ping statistics --- 00:08:23.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.005 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.005 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=796129 00:08:23.006 18:59:15 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 796129 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 796129 ']' 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.006 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:23.264 [2024-07-25 18:59:15.480061] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:23.264 [2024-07-25 18:59:15.480188] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.264 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.264 [2024-07-25 18:59:15.556968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:23.264 [2024-07-25 18:59:15.667085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:23.264 [2024-07-25 18:59:15.667147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.264 [2024-07-25 18:59:15.667162] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.264 [2024-07-25 18:59:15.667173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.264 [2024-07-25 18:59:15.667183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.264 [2024-07-25 18:59:15.667232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.264 [2024-07-25 18:59:15.667237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.244 [2024-07-25 18:59:16.460311] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.244 [2024-07-25 18:59:16.476590] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.244 NULL1 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.244 18:59:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.244 Delay0 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=796283 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:24.244 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:24.244 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.244 [2024-07-25 18:59:16.551241] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:26.167 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:26.167 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.167 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, 
sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 [2024-07-25 18:59:18.689757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd658000c00 is same with the state(5) to be set 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read 
completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error 
(sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 [2024-07-25 18:59:18.690662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd65800d490 is same with the state(5) to be set 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 
00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 starting I/O failed: -6 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 [2024-07-25 18:59:18.691169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b955c0 is same with the state(5) to be set 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 
Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Write completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:26.424 Read completed with error (sct=0, sc=8) 00:08:27.356 [2024-07-25 18:59:19.648518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1b96ac0 is same with the state(5) to be set
00:08:27.356 Read completed with error (sct=0, sc=8)
00:08:27.356 Write completed with error (sct=0, sc=8)
00:08:27.356 [2024-07-25 18:59:19.692275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b953e0 is same with the state(5) to be set
00:08:27.356 [2024-07-25 18:59:19.692507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b958f0 is same with the state(5) to be set
00:08:27.357 [2024-07-25 18:59:19.693621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd65800d000 is same with the state(5) to be set
00:08:27.357 [2024-07-25 18:59:19.693884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd65800d7c0 is same with the state(5) to be set
00:08:27.357 Initializing NVMe Controllers
00:08:27.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:27.357 Controller IO queue size 128, less than required.
00:08:27.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:27.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:27.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:27.357 Initialization complete. Launching workers.
00:08:27.357 ========================================================
00:08:27.357 Latency(us)
00:08:27.357 Device Information : IOPS MiB/s Average min max
00:08:27.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.23 0.08 975501.46 433.69 2002867.53
00:08:27.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.74 0.08 939379.59 920.25 2005301.98
00:08:27.357 ========================================================
00:08:27.357 Total : 327.97 0.16 957467.85 433.69 2005301.98
00:08:27.357
00:08:27.357 [2024-07-25 18:59:19.694725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b96ac0 (9): Bad file descriptor
00:08:27.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:27.357 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.357 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:27.357 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 796283
00:08:27.357 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 796283
00:08:27.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (796283) - No such process
00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 796283
00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:08:27.922 18:59:20
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 796283 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 796283 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.922 
18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.922 [2024-07-25 18:59:20.218157] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=796689 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 796689 00:08:27.922 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.922 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.922 [2024-07-25 18:59:20.283974] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on 
TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:28.487 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:28.487 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 796689 00:08:28.487 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:29.053 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:29.053 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 796689 00:08:29.053 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:29.311 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:29.311 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 796689 00:08:29.311 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:29.877 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:29.877 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 796689 00:08:29.877 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:30.442 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:30.442 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 796689 00:08:30.442 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 
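The repeated xtrace entries above come from a small polling loop in delete_subsystem.sh: it probes the background perf process with `kill -0` every 0.5 seconds until the process exits or an iteration bound is hit. A minimal standalone sketch of that pattern follows; the child command, the bound of 20, and the messages are illustrative stand-ins, not taken from the test script.

```shell
#!/usr/bin/env sh
# Poll a background process with `kill -0` until it exits,
# giving up after a bounded number of 0.5 s iterations.
sleep 1 &                 # stand-in for the spdk_nvme_perf child
pid=$!

delay=0
while kill -0 "$pid" 2>/dev/null; do
    if [ "$delay" -gt 20 ]; then
        echo "process $pid did not exit in time" >&2
        break
    fi
    delay=$((delay + 1))
    sleep 0.5
done
wait "$pid" 2>/dev/null   # reap the child; status not checked here
echo "polled $delay times"
```

`kill -0` sends no signal; it only checks that the PID is still deliverable, which is why the trace shows "No such process" once the perf workload has exited.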
00:08:31.008 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:31.008 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 796689
00:08:31.008 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:31.008 Initializing NVMe Controllers
00:08:31.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:31.008 Controller IO queue size 128, less than required.
00:08:31.008 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:31.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:31.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:31.008 Initialization complete. Launching workers.
00:08:31.008 ========================================================
00:08:31.008 Latency(us)
00:08:31.008 Device Information : IOPS MiB/s Average min max
00:08:31.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005494.55 1000281.58 1043054.07
00:08:31.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004012.71 1000208.78 1012255.96
00:08:31.008 ========================================================
00:08:31.008 Total : 256.00 0.12 1004753.63 1000208.78 1043054.07
00:08:31.008
00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 796689
00:08:31.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (796689) - No such process
00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem --
target/delete_subsystem.sh@67 -- # wait 796689 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:31.574 rmmod nvme_tcp 00:08:31.574 rmmod nvme_fabrics 00:08:31.574 rmmod nvme_keyring 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 796129 ']' 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 796129 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 796129 ']' 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 796129 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 
00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 796129 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 796129' 00:08:31.574 killing process with pid 796129 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 796129 00:08:31.574 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 796129 00:08:31.834 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:31.834 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:31.834 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:31.834 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.834 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.834 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.834 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.834 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.739 18:59:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:08:33.739
00:08:33.739 real 0m13.496s
00:08:33.739 user 0m29.382s
00:08:33.739 sys 0m3.366s
00:08:33.739 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:33.739 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:33.739 ************************************
00:08:33.739 END TEST nvmf_delete_subsystem
00:08:33.739 ************************************
00:08:33.739 18:59:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:08:33.739 18:59:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:33.739 18:59:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:33.739 18:59:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:33.739 ************************************
00:08:33.739 START TEST nvmf_host_management
00:08:33.739 ************************************
00:08:33.739 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:08:33.998 * Looking for test storage...
00:08:33.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.998 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.998 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:33.998 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.998 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.998 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.998 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.998 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.998 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.998 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.998 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.998 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.998 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # 
nvmftestinit 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:33.999 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:36.529 18:59:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:36.529 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.529 
18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:36.529 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:36.529 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:36.530 
18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:36.530 Found net devices under 0000:09:00.0: cvl_0_0 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:36.530 Found net devices under 0000:09:00.1: cvl_0_1 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:36.530 
18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:36.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:08:36.530 00:08:36.530 --- 10.0.0.2 ping statistics --- 00:08:36.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.530 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:36.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:36.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:08:36.530 00:08:36.530 --- 10.0.0.1 ping statistics --- 00:08:36.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.530 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:36.530 18:59:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=799450 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 799450 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 799450 ']' 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:36.530 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:36.789 [2024-07-25 18:59:29.034899] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:36.789 [2024-07-25 18:59:29.034989] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.789 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.789 [2024-07-25 18:59:29.114060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.789 [2024-07-25 18:59:29.233688] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.789 [2024-07-25 18:59:29.233745] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.789 [2024-07-25 18:59:29.233762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.789 [2024-07-25 18:59:29.233775] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.789 [2024-07-25 18:59:29.233787] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:36.789 [2024-07-25 18:59:29.233866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.789 [2024-07-25 18:59:29.233981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.789 [2024-07-25 18:59:29.234047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:36.789 [2024-07-25 18:59:29.234049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.722 [2024-07-25 18:59:29.985626] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:37.722 18:59:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.722 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.722 Malloc0 00:08:37.722 [2024-07-25 18:59:30.045469] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=799623 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 799623 /var/tmp/bdevperf.sock 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 799623 ']' 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:37.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:37.722 { 00:08:37.722 "params": { 00:08:37.722 "name": "Nvme$subsystem", 00:08:37.722 "trtype": "$TEST_TRANSPORT", 00:08:37.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:37.722 "adrfam": "ipv4", 00:08:37.722 "trsvcid": "$NVMF_PORT", 00:08:37.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:37.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:37.722 "hdgst": ${hdgst:-false}, 
00:08:37.722 "ddgst": ${ddgst:-false} 00:08:37.722 }, 00:08:37.722 "method": "bdev_nvme_attach_controller" 00:08:37.722 } 00:08:37.722 EOF 00:08:37.722 )") 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:37.722 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:37.722 "params": { 00:08:37.722 "name": "Nvme0", 00:08:37.722 "trtype": "tcp", 00:08:37.722 "traddr": "10.0.0.2", 00:08:37.722 "adrfam": "ipv4", 00:08:37.722 "trsvcid": "4420", 00:08:37.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:37.722 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:37.722 "hdgst": false, 00:08:37.722 "ddgst": false 00:08:37.722 }, 00:08:37.722 "method": "bdev_nvme_attach_controller" 00:08:37.722 }' 00:08:37.722 [2024-07-25 18:59:30.118703] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:37.722 [2024-07-25 18:59:30.118792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid799623 ] 00:08:37.722 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.722 [2024-07-25 18:59:30.190433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.980 [2024-07-25 18:59:30.305646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.238 Running I/O for 10 seconds... 
00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:08:38.239 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:38.499 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:38.499 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:38.499 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:38.499 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:38.499 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.499 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.499 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.499 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=449 00:08:38.499 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 449 -ge 100 ']' 00:08:38.499 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:38.499 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:38.499 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:38.499 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:38.499 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.499 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.499 [2024-07-25 18:59:30.964518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.499 [2024-07-25 18:59:30.965098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650
is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 
00:08:38.500 [2024-07-25 18:59:30.965283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965307] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965355] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9650 is same with the state(5) to be set 00:08:38.500 [2024-07-25 18:59:30.965766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.965812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.965845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.965861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.965879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.965893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.965909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.965923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.965940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.965954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.965970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.965984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.966000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.966014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.966030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.966044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 
18:59:30.966060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.966074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.966089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.966114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.966132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.966147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.966163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.966176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.966192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.966206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.966226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.966241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.966256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.966271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.966287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.966301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.966316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.966332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.966347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.966362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.966377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.500 [2024-07-25 18:59:30.966391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.500 [2024-07-25 18:59:30.966411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 
nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:38.501 [2024-07-25 18:59:30.966589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966754] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.966979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.966994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 
18:59:30.967277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967452] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.501 [2024-07-25 18:59:30.967583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.501 [2024-07-25 18:59:30.967598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.502 [2024-07-25 18:59:30.967612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.502 [2024-07-25 18:59:30.967628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.502 [2024-07-25 18:59:30.967642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.502 [2024-07-25 18:59:30.967657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.502 [2024-07-25 18:59:30.967671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.502 [2024-07-25 18:59:30.967686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.502 [2024-07-25 18:59:30.967700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.502 [2024-07-25 18:59:30.967715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.502 [2024-07-25 18:59:30.967730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.502 [2024-07-25 18:59:30.967748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.502 [2024-07-25 18:59:30.967763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.502 [2024-07-25 18:59:30.967777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f5a0 is same with the state(5) to be set 00:08:38.502 [2024-07-25 
18:59:30.967859] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x180f5a0 was disconnected and freed. reset controller. 00:08:38.760 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.760 [2024-07-25 18:59:30.969057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:38.760 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:38.760 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.760 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.760 task offset: 57344 on job bdev=Nvme0n1 fails 00:08:38.760 00:08:38.760 Latency(us) 00:08:38.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.760 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:38.760 Job: Nvme0n1 ended in about 0.39 seconds with error 00:08:38.760 Verification LBA range: start 0x0 length 0x400 00:08:38.760 Nvme0n1 : 0.39 1160.77 72.55 165.82 0.00 46880.28 10048.85 40583.77 00:08:38.760 =================================================================================================================== 00:08:38.760 Total : 1160.77 72.55 165.82 0.00 46880.28 10048.85 40583.77 00:08:38.760 [2024-07-25 18:59:30.971510] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:38.760 [2024-07-25 18:59:30.971560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fe790 (9): Bad file descriptor 00:08:38.760 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.760 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # 
sleep 1 00:08:38.760 [2024-07-25 18:59:30.981751] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:39.694 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 799623 00:08:39.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (799623) - No such process 00:08:39.695 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:39.695 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:39.695 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:39.695 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:39.695 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:39.695 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:39.695 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:39.695 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:39.695 { 00:08:39.695 "params": { 00:08:39.695 "name": "Nvme$subsystem", 00:08:39.695 "trtype": "$TEST_TRANSPORT", 00:08:39.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.695 "adrfam": "ipv4", 00:08:39.695 "trsvcid": "$NVMF_PORT", 00:08:39.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.695 "hdgst": ${hdgst:-false}, 00:08:39.695 "ddgst": 
${ddgst:-false} 00:08:39.695 }, 00:08:39.695 "method": "bdev_nvme_attach_controller" 00:08:39.695 } 00:08:39.695 EOF 00:08:39.695 )") 00:08:39.695 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:39.695 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:39.695 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:39.695 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:39.695 "params": { 00:08:39.695 "name": "Nvme0", 00:08:39.695 "trtype": "tcp", 00:08:39.695 "traddr": "10.0.0.2", 00:08:39.695 "adrfam": "ipv4", 00:08:39.695 "trsvcid": "4420", 00:08:39.695 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.695 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:39.695 "hdgst": false, 00:08:39.695 "ddgst": false 00:08:39.695 }, 00:08:39.695 "method": "bdev_nvme_attach_controller" 00:08:39.695 }' 00:08:39.695 [2024-07-25 18:59:32.026025] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:39.695 [2024-07-25 18:59:32.026130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid799899 ] 00:08:39.695 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.695 [2024-07-25 18:59:32.094669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.953 [2024-07-25 18:59:32.210334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.211 Running I/O for 1 seconds... 
00:08:41.144 00:08:41.144 Latency(us) 00:08:41.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.144 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:41.144 Verification LBA range: start 0x0 length 0x400 00:08:41.144 Nvme0n1 : 1.01 1077.42 67.34 0.00 0.00 58586.90 13495.56 48545.19 00:08:41.144 =================================================================================================================== 00:08:41.144 Total : 1077.42 67.34 0.00 0.00 58586.90 13495.56 48545.19 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:41.402 rmmod nvme_tcp 
00:08:41.402 rmmod nvme_fabrics 00:08:41.402 rmmod nvme_keyring 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 799450 ']' 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 799450 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 799450 ']' 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 799450 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 799450 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 799450' 00:08:41.402 killing process with pid 799450 00:08:41.402 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 799450 00:08:41.403 18:59:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 799450 00:08:41.661 [2024-07-25 18:59:34.085791] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:41.661 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:41.661 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:41.661 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:41.661 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.661 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.661 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.661 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.661 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:44.208 00:08:44.208 real 0m9.960s 00:08:44.208 user 0m22.288s 00:08:44.208 sys 0m3.187s 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:44.208 ************************************ 00:08:44.208 END TEST nvmf_host_management 00:08:44.208 ************************************ 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.208 ************************************ 00:08:44.208 START TEST nvmf_lvol 00:08:44.208 ************************************ 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:44.208 * Looking for test storage... 00:08:44.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.208 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 
00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:44.209 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:46.776 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:46.776 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:46.776 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:46.777 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:46.777 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.777 18:59:38 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:46.777 Found net devices under 0000:09:00.0: cvl_0_0 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:46.777 Found net devices under 0000:09:00.1: cvl_0_1 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:46.777 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:46.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:08:46.778 00:08:46.778 --- 10.0.0.2 ping statistics --- 00:08:46.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.778 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:08:46.778 00:08:46.778 --- 10.0.0.1 ping statistics --- 00:08:46.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.778 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=802396 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 802396 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 802396 ']' 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:46.778 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:46.778 [2024-07-25 18:59:38.936964] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:46.778 [2024-07-25 18:59:38.937055] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.778 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.778 [2024-07-25 18:59:39.013565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:46.778 [2024-07-25 18:59:39.124613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.778 [2024-07-25 18:59:39.124686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.778 [2024-07-25 18:59:39.124715] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.778 [2024-07-25 18:59:39.124726] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.778 [2024-07-25 18:59:39.124736] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:46.778 [2024-07-25 18:59:39.124817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.778 [2024-07-25 18:59:39.124886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.778 [2024-07-25 18:59:39.124889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.711 18:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.711 18:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:47.711 18:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:47.711 18:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:47.711 18:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:47.711 18:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.711 18:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:47.968 [2024-07-25 18:59:40.197438] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.968 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:48.226 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:48.226 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:48.484 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:48.484 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:48.742 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:48.999 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b4e63e82-3470-4cfc-8690-066a29397efe 00:08:48.999 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b4e63e82-3470-4cfc-8690-066a29397efe lvol 20 00:08:49.257 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bc9971d4-8b88-4fdc-bc4a-228bb343e320 00:08:49.257 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:49.515 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bc9971d4-8b88-4fdc-bc4a-228bb343e320 00:08:49.773 18:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:50.031 [2024-07-25 18:59:42.281339] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.031 18:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:50.289 18:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=802828 00:08:50.289 18:59:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:50.289 18:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:50.289 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.223 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bc9971d4-8b88-4fdc-bc4a-228bb343e320 MY_SNAPSHOT 00:08:51.481 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=37267f27-b51e-4989-a041-88bc47cd9b2f 00:08:51.481 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bc9971d4-8b88-4fdc-bc4a-228bb343e320 30 00:08:51.739 18:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 37267f27-b51e-4989-a041-88bc47cd9b2f MY_CLONE 00:08:51.997 18:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9b6b6e89-f1b7-4a3f-ae47-b2bc475ac708 00:08:51.997 18:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9b6b6e89-f1b7-4a3f-ae47-b2bc475ac708 00:08:52.255 18:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 802828 00:09:02.228 Initializing NVMe Controllers 00:09:02.228 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:02.228 Controller IO queue size 128, less than required. 00:09:02.228 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:02.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:02.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:02.228 Initialization complete. Launching workers. 00:09:02.228 ======================================================== 00:09:02.228 Latency(us) 00:09:02.228 Device Information : IOPS MiB/s Average min max 00:09:02.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10763.70 42.05 11898.88 407.47 88567.18 00:09:02.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10181.50 39.77 12577.71 2103.11 62362.29 00:09:02.228 ======================================================== 00:09:02.228 Total : 20945.20 81.82 12228.86 407.47 88567.18 00:09:02.228 00:09:02.228 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bc9971d4-8b88-4fdc-bc4a-228bb343e320 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b4e63e82-3470-4cfc-8690-066a29397efe 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:02.228 rmmod nvme_tcp 00:09:02.228 rmmod nvme_fabrics 00:09:02.228 rmmod nvme_keyring 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 802396 ']' 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 802396 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 802396 ']' 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 802396 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 802396 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 802396' 00:09:02.228 killing process with pid 802396 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@969 -- # kill 802396 00:09:02.228 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 802396 00:09:02.228 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:02.228 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:02.228 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:02.228 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:02.228 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:02.228 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.228 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.228 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:04.131 00:09:04.131 real 0m20.018s 00:09:04.131 user 1m6.374s 00:09:04.131 sys 0m6.290s 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.131 ************************************ 00:09:04.131 END TEST nvmf_lvol 00:09:04.131 ************************************ 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:04.131 ************************************ 00:09:04.131 START TEST nvmf_lvs_grow 00:09:04.131 ************************************ 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:04.131 * Looking for test storage... 00:09:04.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:09:04.131 18:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.661 18:59:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:06.661 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.661 
18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:06.661 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.661 18:59:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:06.661 Found net devices under 0000:09:00.0: cvl_0_0 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:06.661 Found net devices under 0000:09:00.1: cvl_0_1 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.661 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.662 18:59:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:06.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:06.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:09:06.662 00:09:06.662 --- 10.0.0.2 ping statistics --- 00:09:06.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.662 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:06.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:09:06.662 00:09:06.662 --- 10.0.0.1 ping statistics --- 00:09:06.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.662 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=806497 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 806497 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 806497 ']' 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.662 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:06.662 [2024-07-25 18:59:58.981039] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:06.662 [2024-07-25 18:59:58.981129] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.662 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.662 [2024-07-25 18:59:59.058484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.919 [2024-07-25 18:59:59.175761] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.919 [2024-07-25 18:59:59.175816] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:06.920 [2024-07-25 18:59:59.175832] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.920 [2024-07-25 18:59:59.175846] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.920 [2024-07-25 18:59:59.175857] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.920 [2024-07-25 18:59:59.175892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.484 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.484 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:07.484 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:07.484 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:07.484 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:07.484 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.484 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:07.741 [2024-07-25 19:00:00.180328] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.741 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:07.741 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:07.741 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.741 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:09:08.001 ************************************ 00:09:08.001 START TEST lvs_grow_clean 00:09:08.001 ************************************ 00:09:08.001 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:08.001 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:08.001 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:08.001 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:08.001 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:08.001 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:08.001 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:08.001 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:08.001 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:08.001 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.260 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:08.260 19:00:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:08.517 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=34c95dab-81e3-497e-836e-57103d02a271 00:09:08.517 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34c95dab-81e3-497e-836e-57103d02a271 00:09:08.517 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:08.775 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:08.775 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:08.775 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 34c95dab-81e3-497e-836e-57103d02a271 lvol 150 00:09:08.775 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9acdce5b-d6ae-47fd-a16f-e293d41b6fdd 00:09:08.775 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.033 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:09.033 [2024-07-25 19:00:01.487335] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:09.033 [2024-07-25 19:00:01.487432] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:09.033 true 00:09:09.289 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34c95dab-81e3-497e-836e-57103d02a271 00:09:09.289 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:09.289 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:09.289 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:09.546 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9acdce5b-d6ae-47fd-a16f-e293d41b6fdd 00:09:09.804 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:10.060 [2024-07-25 19:00:02.486378] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.060 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:10.316 19:00:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=807071 00:09:10.316 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:10.316 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:10.317 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 807071 /var/tmp/bdevperf.sock 00:09:10.317 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 807071 ']' 00:09:10.317 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:10.317 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.317 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:10.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:10.317 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.317 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:10.574 [2024-07-25 19:00:02.795822] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:10.574 [2024-07-25 19:00:02.795901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid807071 ] 00:09:10.575 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.575 [2024-07-25 19:00:02.868246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.575 [2024-07-25 19:00:02.987539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.523 19:00:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.523 19:00:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:11.523 19:00:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:11.780 Nvme0n1 00:09:11.780 19:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:12.038 [ 00:09:12.038 { 00:09:12.038 "name": "Nvme0n1", 00:09:12.038 "aliases": [ 00:09:12.038 "9acdce5b-d6ae-47fd-a16f-e293d41b6fdd" 00:09:12.038 ], 00:09:12.038 "product_name": "NVMe disk", 00:09:12.038 "block_size": 4096, 00:09:12.038 "num_blocks": 38912, 00:09:12.038 "uuid": "9acdce5b-d6ae-47fd-a16f-e293d41b6fdd", 00:09:12.038 "assigned_rate_limits": { 00:09:12.038 "rw_ios_per_sec": 0, 00:09:12.038 "rw_mbytes_per_sec": 0, 00:09:12.038 "r_mbytes_per_sec": 0, 00:09:12.038 "w_mbytes_per_sec": 0 00:09:12.038 }, 00:09:12.038 "claimed": false, 00:09:12.038 "zoned": false, 00:09:12.038 
"supported_io_types": { 00:09:12.038 "read": true, 00:09:12.038 "write": true, 00:09:12.038 "unmap": true, 00:09:12.038 "flush": true, 00:09:12.038 "reset": true, 00:09:12.038 "nvme_admin": true, 00:09:12.038 "nvme_io": true, 00:09:12.038 "nvme_io_md": false, 00:09:12.038 "write_zeroes": true, 00:09:12.038 "zcopy": false, 00:09:12.038 "get_zone_info": false, 00:09:12.038 "zone_management": false, 00:09:12.038 "zone_append": false, 00:09:12.038 "compare": true, 00:09:12.038 "compare_and_write": true, 00:09:12.038 "abort": true, 00:09:12.038 "seek_hole": false, 00:09:12.038 "seek_data": false, 00:09:12.038 "copy": true, 00:09:12.038 "nvme_iov_md": false 00:09:12.038 }, 00:09:12.038 "memory_domains": [ 00:09:12.038 { 00:09:12.038 "dma_device_id": "system", 00:09:12.038 "dma_device_type": 1 00:09:12.038 } 00:09:12.038 ], 00:09:12.038 "driver_specific": { 00:09:12.038 "nvme": [ 00:09:12.038 { 00:09:12.038 "trid": { 00:09:12.038 "trtype": "TCP", 00:09:12.038 "adrfam": "IPv4", 00:09:12.038 "traddr": "10.0.0.2", 00:09:12.038 "trsvcid": "4420", 00:09:12.038 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:12.038 }, 00:09:12.038 "ctrlr_data": { 00:09:12.038 "cntlid": 1, 00:09:12.038 "vendor_id": "0x8086", 00:09:12.038 "model_number": "SPDK bdev Controller", 00:09:12.038 "serial_number": "SPDK0", 00:09:12.038 "firmware_revision": "24.09", 00:09:12.038 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:12.038 "oacs": { 00:09:12.038 "security": 0, 00:09:12.038 "format": 0, 00:09:12.038 "firmware": 0, 00:09:12.038 "ns_manage": 0 00:09:12.038 }, 00:09:12.038 "multi_ctrlr": true, 00:09:12.038 "ana_reporting": false 00:09:12.038 }, 00:09:12.038 "vs": { 00:09:12.038 "nvme_version": "1.3" 00:09:12.038 }, 00:09:12.038 "ns_data": { 00:09:12.038 "id": 1, 00:09:12.038 "can_share": true 00:09:12.038 } 00:09:12.038 } 00:09:12.038 ], 00:09:12.038 "mp_policy": "active_passive" 00:09:12.038 } 00:09:12.038 } 00:09:12.038 ] 00:09:12.038 19:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=807228 00:09:12.038 19:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:12.038 19:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:12.296 Running I/O for 10 seconds... 00:09:13.229 Latency(us) 00:09:13.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.229 Nvme0n1 : 1.00 14023.00 54.78 0.00 0.00 0.00 0.00 0.00 00:09:13.229 =================================================================================================================== 00:09:13.229 Total : 14023.00 54.78 0.00 0.00 0.00 0.00 0.00 00:09:13.229 00:09:14.175 19:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 34c95dab-81e3-497e-836e-57103d02a271 00:09:14.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.175 Nvme0n1 : 2.00 14147.00 55.26 0.00 0.00 0.00 0.00 0.00 00:09:14.175 =================================================================================================================== 00:09:14.175 Total : 14147.00 55.26 0.00 0.00 0.00 0.00 0.00 00:09:14.175 00:09:14.433 true 00:09:14.433 19:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34c95dab-81e3-497e-836e-57103d02a271 00:09:14.433 19:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:14.691 19:00:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:14.691 19:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:14.691 19:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 807228 00:09:15.257 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.257 Nvme0n1 : 3.00 14231.33 55.59 0.00 0.00 0.00 0.00 0.00 00:09:15.258 =================================================================================================================== 00:09:15.258 Total : 14231.33 55.59 0.00 0.00 0.00 0.00 0.00 00:09:15.258 00:09:16.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.193 Nvme0n1 : 4.00 14289.50 55.82 0.00 0.00 0.00 0.00 0.00 00:09:16.193 =================================================================================================================== 00:09:16.193 Total : 14289.50 55.82 0.00 0.00 0.00 0.00 0.00 00:09:16.193 00:09:17.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.567 Nvme0n1 : 5.00 14337.20 56.00 0.00 0.00 0.00 0.00 0.00 00:09:17.567 =================================================================================================================== 00:09:17.567 Total : 14337.20 56.00 0.00 0.00 0.00 0.00 0.00 00:09:17.567 00:09:18.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.500 Nvme0n1 : 6.00 14379.83 56.17 0.00 0.00 0.00 0.00 0.00 00:09:18.500 =================================================================================================================== 00:09:18.500 Total : 14379.83 56.17 0.00 0.00 0.00 0.00 0.00 00:09:18.500 00:09:19.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.440 Nvme0n1 : 7.00 14419.14 56.32 0.00 0.00 0.00 0.00 0.00 00:09:19.440 
=================================================================================================================== 00:09:19.440 Total : 14419.14 56.32 0.00 0.00 0.00 0.00 0.00 00:09:19.440 00:09:20.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.373 Nvme0n1 : 8.00 14448.88 56.44 0.00 0.00 0.00 0.00 0.00 00:09:20.373 =================================================================================================================== 00:09:20.373 Total : 14448.88 56.44 0.00 0.00 0.00 0.00 0.00 00:09:20.373 00:09:21.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.306 Nvme0n1 : 9.00 14478.89 56.56 0.00 0.00 0.00 0.00 0.00 00:09:21.306 =================================================================================================================== 00:09:21.306 Total : 14478.89 56.56 0.00 0.00 0.00 0.00 0.00 00:09:21.306 00:09:22.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.241 Nvme0n1 : 10.00 14509.50 56.68 0.00 0.00 0.00 0.00 0.00 00:09:22.241 =================================================================================================================== 00:09:22.241 Total : 14509.50 56.68 0.00 0.00 0.00 0.00 0.00 00:09:22.241 00:09:22.241 00:09:22.241 Latency(us) 00:09:22.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.241 Nvme0n1 : 10.01 14510.34 56.68 0.00 0.00 8815.02 2318.03 14175.19 00:09:22.241 =================================================================================================================== 00:09:22.241 Total : 14510.34 56.68 0.00 0.00 8815.02 2318.03 14175.19 00:09:22.241 0 00:09:22.241 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 807071 00:09:22.241 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@950 -- # '[' -z 807071 ']' 00:09:22.241 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 807071 00:09:22.241 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:22.241 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.241 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 807071 00:09:22.241 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:22.241 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:22.241 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 807071' 00:09:22.241 killing process with pid 807071 00:09:22.241 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 807071 00:09:22.241 Received shutdown signal, test time was about 10.000000 seconds 00:09:22.241 00:09:22.241 Latency(us) 00:09:22.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.241 =================================================================================================================== 00:09:22.241 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:22.242 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 807071 00:09:22.498 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:23.062 19:00:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:23.062 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34c95dab-81e3-497e-836e-57103d02a271 00:09:23.062 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:23.319 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:23.319 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:23.319 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:23.576 [2024-07-25 19:00:15.973858] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:23.576 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34c95dab-81e3-497e-836e-57103d02a271 00:09:23.576 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:23.576 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34c95dab-81e3-497e-836e-57103d02a271 00:09:23.576 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.576 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.576 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.576 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.576 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.576 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.576 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.576 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:23.576 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34c95dab-81e3-497e-836e-57103d02a271 00:09:23.834 request: 00:09:23.834 { 00:09:23.834 "uuid": "34c95dab-81e3-497e-836e-57103d02a271", 00:09:23.834 "method": "bdev_lvol_get_lvstores", 00:09:23.834 "req_id": 1 00:09:23.834 } 00:09:23.834 Got JSON-RPC error response 00:09:23.834 response: 00:09:23.834 { 00:09:23.834 "code": -19, 00:09:23.834 "message": "No such device" 00:09:23.834 } 00:09:23.834 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:23.834 19:00:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:23.834 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:23.834 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:23.834 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.091 aio_bdev 00:09:24.091 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9acdce5b-d6ae-47fd-a16f-e293d41b6fdd 00:09:24.091 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=9acdce5b-d6ae-47fd-a16f-e293d41b6fdd 00:09:24.091 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.091 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:24.091 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.091 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.091 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:24.349 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9acdce5b-d6ae-47fd-a16f-e293d41b6fdd -t 2000 00:09:24.608 [ 00:09:24.608 { 
00:09:24.608 "name": "9acdce5b-d6ae-47fd-a16f-e293d41b6fdd", 00:09:24.608 "aliases": [ 00:09:24.608 "lvs/lvol" 00:09:24.608 ], 00:09:24.608 "product_name": "Logical Volume", 00:09:24.608 "block_size": 4096, 00:09:24.608 "num_blocks": 38912, 00:09:24.608 "uuid": "9acdce5b-d6ae-47fd-a16f-e293d41b6fdd", 00:09:24.608 "assigned_rate_limits": { 00:09:24.608 "rw_ios_per_sec": 0, 00:09:24.608 "rw_mbytes_per_sec": 0, 00:09:24.608 "r_mbytes_per_sec": 0, 00:09:24.608 "w_mbytes_per_sec": 0 00:09:24.608 }, 00:09:24.608 "claimed": false, 00:09:24.608 "zoned": false, 00:09:24.608 "supported_io_types": { 00:09:24.608 "read": true, 00:09:24.608 "write": true, 00:09:24.608 "unmap": true, 00:09:24.608 "flush": false, 00:09:24.608 "reset": true, 00:09:24.608 "nvme_admin": false, 00:09:24.608 "nvme_io": false, 00:09:24.608 "nvme_io_md": false, 00:09:24.608 "write_zeroes": true, 00:09:24.608 "zcopy": false, 00:09:24.608 "get_zone_info": false, 00:09:24.608 "zone_management": false, 00:09:24.608 "zone_append": false, 00:09:24.608 "compare": false, 00:09:24.608 "compare_and_write": false, 00:09:24.608 "abort": false, 00:09:24.608 "seek_hole": true, 00:09:24.608 "seek_data": true, 00:09:24.608 "copy": false, 00:09:24.608 "nvme_iov_md": false 00:09:24.608 }, 00:09:24.608 "driver_specific": { 00:09:24.608 "lvol": { 00:09:24.608 "lvol_store_uuid": "34c95dab-81e3-497e-836e-57103d02a271", 00:09:24.608 "base_bdev": "aio_bdev", 00:09:24.608 "thin_provision": false, 00:09:24.608 "num_allocated_clusters": 38, 00:09:24.609 "snapshot": false, 00:09:24.609 "clone": false, 00:09:24.609 "esnap_clone": false 00:09:24.609 } 00:09:24.609 } 00:09:24.609 } 00:09:24.609 ] 00:09:24.609 19:00:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:24.609 19:00:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
34c95dab-81e3-497e-836e-57103d02a271 00:09:24.609 19:00:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:24.867 19:00:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:24.867 19:00:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34c95dab-81e3-497e-836e-57103d02a271 00:09:24.867 19:00:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:25.124 19:00:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:25.124 19:00:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9acdce5b-d6ae-47fd-a16f-e293d41b6fdd 00:09:25.383 19:00:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 34c95dab-81e3-497e-836e-57103d02a271 00:09:25.640 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.898 00:09:25.898 real 0m18.071s 00:09:25.898 user 0m17.746s 00:09:25.898 sys 0m1.948s 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.898 19:00:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:25.898 ************************************ 00:09:25.898 END TEST lvs_grow_clean 00:09:25.898 ************************************ 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:25.898 ************************************ 00:09:25.898 START TEST lvs_grow_dirty 00:09:25.898 ************************************ 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.898 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:26.465 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:26.465 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:26.465 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c1b62aef-a7ed-4791-832c-c90f7729db07 00:09:26.465 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b62aef-a7ed-4791-832c-c90f7729db07 00:09:26.465 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:26.724 19:00:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:26.724 19:00:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:26.724 19:00:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
c1b62aef-a7ed-4791-832c-c90f7729db07 lvol 150 00:09:26.982 19:00:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=dc3e07b7-ec7d-4795-9833-be915062dd3e 00:09:26.982 19:00:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:26.982 19:00:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:27.240 [2024-07-25 19:00:19.618249] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:27.240 [2024-07-25 19:00:19.618342] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:27.240 true 00:09:27.240 19:00:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b62aef-a7ed-4791-832c-c90f7729db07 00:09:27.240 19:00:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:27.497 19:00:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:27.497 19:00:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:27.754 19:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
dc3e07b7-ec7d-4795-9833-be915062dd3e 00:09:28.012 19:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:28.270 [2024-07-25 19:00:20.605299] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.270 19:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.529 19:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=809767 00:09:28.529 19:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:28.529 19:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:28.529 19:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 809767 /var/tmp/bdevperf.sock 00:09:28.529 19:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 809767 ']' 00:09:28.529 19:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:28.529 19:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.529 19:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:28.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:28.529 19:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.529 19:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:28.529 [2024-07-25 19:00:20.909938] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:28.529 [2024-07-25 19:00:20.910019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid809767 ] 00:09:28.529 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.529 [2024-07-25 19:00:20.979364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.787 [2024-07-25 19:00:21.098958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.787 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.787 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:28.787 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:29.392 Nvme0n1 00:09:29.392 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:29.651 [ 00:09:29.651 { 00:09:29.651 "name": "Nvme0n1", 00:09:29.651 "aliases": [ 
00:09:29.651 "dc3e07b7-ec7d-4795-9833-be915062dd3e" 00:09:29.651 ], 00:09:29.651 "product_name": "NVMe disk", 00:09:29.651 "block_size": 4096, 00:09:29.651 "num_blocks": 38912, 00:09:29.651 "uuid": "dc3e07b7-ec7d-4795-9833-be915062dd3e", 00:09:29.651 "assigned_rate_limits": { 00:09:29.651 "rw_ios_per_sec": 0, 00:09:29.651 "rw_mbytes_per_sec": 0, 00:09:29.651 "r_mbytes_per_sec": 0, 00:09:29.651 "w_mbytes_per_sec": 0 00:09:29.651 }, 00:09:29.651 "claimed": false, 00:09:29.651 "zoned": false, 00:09:29.651 "supported_io_types": { 00:09:29.651 "read": true, 00:09:29.651 "write": true, 00:09:29.651 "unmap": true, 00:09:29.651 "flush": true, 00:09:29.651 "reset": true, 00:09:29.651 "nvme_admin": true, 00:09:29.651 "nvme_io": true, 00:09:29.651 "nvme_io_md": false, 00:09:29.651 "write_zeroes": true, 00:09:29.651 "zcopy": false, 00:09:29.651 "get_zone_info": false, 00:09:29.651 "zone_management": false, 00:09:29.651 "zone_append": false, 00:09:29.651 "compare": true, 00:09:29.651 "compare_and_write": true, 00:09:29.651 "abort": true, 00:09:29.651 "seek_hole": false, 00:09:29.651 "seek_data": false, 00:09:29.651 "copy": true, 00:09:29.651 "nvme_iov_md": false 00:09:29.651 }, 00:09:29.651 "memory_domains": [ 00:09:29.651 { 00:09:29.651 "dma_device_id": "system", 00:09:29.651 "dma_device_type": 1 00:09:29.651 } 00:09:29.651 ], 00:09:29.651 "driver_specific": { 00:09:29.651 "nvme": [ 00:09:29.651 { 00:09:29.651 "trid": { 00:09:29.651 "trtype": "TCP", 00:09:29.651 "adrfam": "IPv4", 00:09:29.651 "traddr": "10.0.0.2", 00:09:29.651 "trsvcid": "4420", 00:09:29.651 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:29.651 }, 00:09:29.651 "ctrlr_data": { 00:09:29.651 "cntlid": 1, 00:09:29.651 "vendor_id": "0x8086", 00:09:29.651 "model_number": "SPDK bdev Controller", 00:09:29.651 "serial_number": "SPDK0", 00:09:29.651 "firmware_revision": "24.09", 00:09:29.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:29.651 "oacs": { 00:09:29.651 "security": 0, 00:09:29.651 "format": 0, 00:09:29.651 
"firmware": 0, 00:09:29.651 "ns_manage": 0 00:09:29.651 }, 00:09:29.651 "multi_ctrlr": true, 00:09:29.651 "ana_reporting": false 00:09:29.651 }, 00:09:29.651 "vs": { 00:09:29.651 "nvme_version": "1.3" 00:09:29.651 }, 00:09:29.651 "ns_data": { 00:09:29.651 "id": 1, 00:09:29.651 "can_share": true 00:09:29.651 } 00:09:29.651 } 00:09:29.651 ], 00:09:29.651 "mp_policy": "active_passive" 00:09:29.651 } 00:09:29.651 } 00:09:29.651 ] 00:09:29.651 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=809901 00:09:29.651 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:29.651 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:29.651 Running I/O for 10 seconds... 00:09:31.026 Latency(us) 00:09:31.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.026 Nvme0n1 : 1.00 14206.00 55.49 0.00 0.00 0.00 0.00 0.00 00:09:31.026 =================================================================================================================== 00:09:31.026 Total : 14206.00 55.49 0.00 0.00 0.00 0.00 0.00 00:09:31.026 00:09:31.592 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c1b62aef-a7ed-4791-832c-c90f7729db07 00:09:31.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.848 Nvme0n1 : 2.00 14271.00 55.75 0.00 0.00 0.00 0.00 0.00 00:09:31.848 =================================================================================================================== 00:09:31.848 Total : 14271.00 55.75 0.00 
0.00 0.00 0.00 0.00 00:09:31.848 00:09:31.848 true 00:09:31.848 19:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b62aef-a7ed-4791-832c-c90f7729db07 00:09:31.848 19:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:32.105 19:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:32.105 19:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:32.105 19:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 809901 00:09:32.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.671 Nvme0n1 : 3.00 14378.00 56.16 0.00 0.00 0.00 0.00 0.00 00:09:32.671 =================================================================================================================== 00:09:32.671 Total : 14378.00 56.16 0.00 0.00 0.00 0.00 0.00 00:09:32.671 00:09:34.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.046 Nvme0n1 : 4.00 14463.75 56.50 0.00 0.00 0.00 0.00 0.00 00:09:34.046 =================================================================================================================== 00:09:34.047 Total : 14463.75 56.50 0.00 0.00 0.00 0.00 0.00 00:09:34.047 00:09:34.980 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.980 Nvme0n1 : 5.00 14514.60 56.70 0.00 0.00 0.00 0.00 0.00 00:09:34.980 =================================================================================================================== 00:09:34.980 Total : 14514.60 56.70 0.00 0.00 0.00 0.00 0.00 00:09:34.980 00:09:35.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:35.913 Nvme0n1 : 6.00 14559.50 56.87 0.00 0.00 0.00 0.00 0.00 00:09:35.913 =================================================================================================================== 00:09:35.913 Total : 14559.50 56.87 0.00 0.00 0.00 0.00 0.00 00:09:35.913 00:09:36.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.846 Nvme0n1 : 7.00 14600.86 57.03 0.00 0.00 0.00 0.00 0.00 00:09:36.846 =================================================================================================================== 00:09:36.846 Total : 14600.86 57.03 0.00 0.00 0.00 0.00 0.00 00:09:36.846 00:09:37.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.777 Nvme0n1 : 8.00 14647.75 57.22 0.00 0.00 0.00 0.00 0.00 00:09:37.777 =================================================================================================================== 00:09:37.777 Total : 14647.75 57.22 0.00 0.00 0.00 0.00 0.00 00:09:37.777 00:09:38.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.709 Nvme0n1 : 9.00 14677.11 57.33 0.00 0.00 0.00 0.00 0.00 00:09:38.709 =================================================================================================================== 00:09:38.709 Total : 14677.11 57.33 0.00 0.00 0.00 0.00 0.00 00:09:38.709 00:09:40.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.080 Nvme0n1 : 10.00 14689.40 57.38 0.00 0.00 0.00 0.00 0.00 00:09:40.080 =================================================================================================================== 00:09:40.080 Total : 14689.40 57.38 0.00 0.00 0.00 0.00 0.00 00:09:40.080 00:09:40.080 00:09:40.080 Latency(us) 00:09:40.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.080 Nvme0n1 : 10.01 14691.71 57.39 0.00 0.00 8706.70 2342.31 
16602.45 00:09:40.080 =================================================================================================================== 00:09:40.080 Total : 14691.71 57.39 0.00 0.00 8706.70 2342.31 16602.45 00:09:40.080 0 00:09:40.080 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 809767 00:09:40.080 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 809767 ']' 00:09:40.080 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 809767 00:09:40.080 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:40.080 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.080 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 809767 00:09:40.080 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:40.080 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:40.080 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 809767' 00:09:40.080 killing process with pid 809767 00:09:40.080 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 809767 00:09:40.080 Received shutdown signal, test time was about 10.000000 seconds 00:09:40.080 00:09:40.080 Latency(us) 00:09:40.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.080 =================================================================================================================== 00:09:40.080 Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:09:40.080 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 809767 00:09:40.080 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:40.337 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:40.595 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b62aef-a7ed-4791-832c-c90f7729db07 00:09:40.595 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 806497 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 806497 00:09:40.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 806497 Killed "${NVMF_APP[@]}" "$@" 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=811238 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 811238 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 811238 ']' 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.853 19:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:40.853 [2024-07-25 19:00:33.320517] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:40.853 [2024-07-25 19:00:33.320605] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.110 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.110 [2024-07-25 19:00:33.401959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.110 [2024-07-25 19:00:33.514903] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.110 [2024-07-25 19:00:33.514949] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.110 [2024-07-25 19:00:33.514978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.110 [2024-07-25 19:00:33.514990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.110 [2024-07-25 19:00:33.514999] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:41.110 [2024-07-25 19:00:33.515029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.043 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:42.043 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:42.043 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:42.043 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:42.043 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:42.043 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.043 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:42.301 [2024-07-25 19:00:34.575146] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:42.301 [2024-07-25 19:00:34.575289] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:42.301 [2024-07-25 19:00:34.575347] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:42.301 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:42.301 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev dc3e07b7-ec7d-4795-9833-be915062dd3e 00:09:42.301 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=dc3e07b7-ec7d-4795-9833-be915062dd3e 
00:09:42.301 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.301 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:42.301 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.301 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.301 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:42.558 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dc3e07b7-ec7d-4795-9833-be915062dd3e -t 2000 00:09:42.816 [ 00:09:42.816 { 00:09:42.816 "name": "dc3e07b7-ec7d-4795-9833-be915062dd3e", 00:09:42.816 "aliases": [ 00:09:42.816 "lvs/lvol" 00:09:42.816 ], 00:09:42.816 "product_name": "Logical Volume", 00:09:42.816 "block_size": 4096, 00:09:42.816 "num_blocks": 38912, 00:09:42.816 "uuid": "dc3e07b7-ec7d-4795-9833-be915062dd3e", 00:09:42.816 "assigned_rate_limits": { 00:09:42.816 "rw_ios_per_sec": 0, 00:09:42.816 "rw_mbytes_per_sec": 0, 00:09:42.816 "r_mbytes_per_sec": 0, 00:09:42.816 "w_mbytes_per_sec": 0 00:09:42.816 }, 00:09:42.816 "claimed": false, 00:09:42.816 "zoned": false, 00:09:42.816 "supported_io_types": { 00:09:42.816 "read": true, 00:09:42.816 "write": true, 00:09:42.816 "unmap": true, 00:09:42.816 "flush": false, 00:09:42.816 "reset": true, 00:09:42.816 "nvme_admin": false, 00:09:42.816 "nvme_io": false, 00:09:42.816 "nvme_io_md": false, 00:09:42.816 "write_zeroes": true, 00:09:42.816 "zcopy": false, 00:09:42.816 "get_zone_info": false, 00:09:42.816 "zone_management": false, 00:09:42.816 "zone_append": 
false, 00:09:42.816 "compare": false, 00:09:42.816 "compare_and_write": false, 00:09:42.816 "abort": false, 00:09:42.816 "seek_hole": true, 00:09:42.816 "seek_data": true, 00:09:42.816 "copy": false, 00:09:42.816 "nvme_iov_md": false 00:09:42.816 }, 00:09:42.816 "driver_specific": { 00:09:42.816 "lvol": { 00:09:42.816 "lvol_store_uuid": "c1b62aef-a7ed-4791-832c-c90f7729db07", 00:09:42.816 "base_bdev": "aio_bdev", 00:09:42.816 "thin_provision": false, 00:09:42.816 "num_allocated_clusters": 38, 00:09:42.816 "snapshot": false, 00:09:42.816 "clone": false, 00:09:42.816 "esnap_clone": false 00:09:42.816 } 00:09:42.816 } 00:09:42.816 } 00:09:42.816 ] 00:09:42.816 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:42.816 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b62aef-a7ed-4791-832c-c90f7729db07 00:09:42.816 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:43.074 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:43.074 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b62aef-a7ed-4791-832c-c90f7729db07 00:09:43.074 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:43.331 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:43.331 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:43.589 [2024-07-25 19:00:35.819990] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:43.589 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b62aef-a7ed-4791-832c-c90f7729db07 00:09:43.589 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:43.589 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b62aef-a7ed-4791-832c-c90f7729db07 00:09:43.589 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.589 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.589 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.589 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.589 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.589 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.589 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.589 19:00:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:43.589 19:00:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b62aef-a7ed-4791-832c-c90f7729db07 00:09:43.846 request: 00:09:43.846 { 00:09:43.846 "uuid": "c1b62aef-a7ed-4791-832c-c90f7729db07", 00:09:43.846 "method": "bdev_lvol_get_lvstores", 00:09:43.846 "req_id": 1 00:09:43.846 } 00:09:43.846 Got JSON-RPC error response 00:09:43.846 response: 00:09:43.846 { 00:09:43.846 "code": -19, 00:09:43.846 "message": "No such device" 00:09:43.846 } 00:09:43.846 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:43.846 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:43.846 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:43.846 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:43.846 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:44.104 aio_bdev 00:09:44.104 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dc3e07b7-ec7d-4795-9833-be915062dd3e 00:09:44.104 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=dc3e07b7-ec7d-4795-9833-be915062dd3e 00:09:44.104 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:44.104 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:44.104 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:44.104 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:44.104 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:44.361 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dc3e07b7-ec7d-4795-9833-be915062dd3e -t 2000 00:09:44.618 [ 00:09:44.618 { 00:09:44.618 "name": "dc3e07b7-ec7d-4795-9833-be915062dd3e", 00:09:44.618 "aliases": [ 00:09:44.618 "lvs/lvol" 00:09:44.618 ], 00:09:44.618 "product_name": "Logical Volume", 00:09:44.618 "block_size": 4096, 00:09:44.618 "num_blocks": 38912, 00:09:44.618 "uuid": "dc3e07b7-ec7d-4795-9833-be915062dd3e", 00:09:44.618 "assigned_rate_limits": { 00:09:44.618 "rw_ios_per_sec": 0, 00:09:44.618 "rw_mbytes_per_sec": 0, 00:09:44.618 "r_mbytes_per_sec": 0, 00:09:44.618 "w_mbytes_per_sec": 0 00:09:44.618 }, 00:09:44.618 "claimed": false, 00:09:44.618 "zoned": false, 00:09:44.618 "supported_io_types": { 00:09:44.618 "read": true, 00:09:44.618 "write": true, 00:09:44.618 "unmap": true, 00:09:44.618 "flush": false, 00:09:44.618 "reset": true, 00:09:44.618 "nvme_admin": false, 00:09:44.618 "nvme_io": false, 00:09:44.618 "nvme_io_md": false, 00:09:44.618 "write_zeroes": true, 00:09:44.618 "zcopy": false, 00:09:44.618 "get_zone_info": false, 00:09:44.618 "zone_management": false, 00:09:44.618 "zone_append": false, 00:09:44.618 "compare": false, 00:09:44.618 "compare_and_write": false, 
00:09:44.618 "abort": false, 00:09:44.618 "seek_hole": true, 00:09:44.618 "seek_data": true, 00:09:44.618 "copy": false, 00:09:44.618 "nvme_iov_md": false 00:09:44.618 }, 00:09:44.618 "driver_specific": { 00:09:44.618 "lvol": { 00:09:44.618 "lvol_store_uuid": "c1b62aef-a7ed-4791-832c-c90f7729db07", 00:09:44.618 "base_bdev": "aio_bdev", 00:09:44.618 "thin_provision": false, 00:09:44.618 "num_allocated_clusters": 38, 00:09:44.618 "snapshot": false, 00:09:44.618 "clone": false, 00:09:44.618 "esnap_clone": false 00:09:44.618 } 00:09:44.618 } 00:09:44.618 } 00:09:44.618 ] 00:09:44.618 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:44.618 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b62aef-a7ed-4791-832c-c90f7729db07 00:09:44.618 19:00:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:44.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:44.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b62aef-a7ed-4791-832c-c90f7729db07 00:09:44.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:45.131 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:45.131 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dc3e07b7-ec7d-4795-9833-be915062dd3e 00:09:45.388 19:00:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c1b62aef-a7ed-4791-832c-c90f7729db07 00:09:45.645 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.902 00:09:45.902 real 0m19.905s 00:09:45.902 user 0m49.964s 00:09:45.902 sys 0m4.772s 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:45.902 ************************************ 00:09:45.902 END TEST lvs_grow_dirty 00:09:45.902 ************************************ 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:45.902 nvmf_trace.0 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:45.902 rmmod nvme_tcp 00:09:45.902 rmmod nvme_fabrics 00:09:45.902 rmmod nvme_keyring 00:09:45.902 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:45.903 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:45.903 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:45.903 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 811238 ']' 00:09:45.903 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 811238 00:09:45.903 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 811238 ']' 00:09:45.903 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 811238 
00:09:45.903 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:45.903 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.903 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 811238 00:09:46.160 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:46.160 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:46.160 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 811238' 00:09:46.160 killing process with pid 811238 00:09:46.160 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 811238 00:09:46.160 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 811238 00:09:46.418 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:46.418 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:46.418 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:46.418 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.418 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.418 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.418 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.418 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.358 19:00:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:48.358 00:09:48.358 real 0m44.424s 00:09:48.358 user 1m14.439s 00:09:48.358 sys 0m8.969s 00:09:48.358 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.358 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:48.358 ************************************ 00:09:48.358 END TEST nvmf_lvs_grow 00:09:48.358 ************************************ 00:09:48.358 19:00:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:48.358 19:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:48.358 19:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:48.359 ************************************ 00:09:48.359 START TEST nvmf_bdev_io_wait 00:09:48.359 ************************************ 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:48.359 * Looking for test storage... 
00:09:48.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:48.359 19:00:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:48.359 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.890 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.890 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:50.890 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:50.890 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:50.890 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:50.890 19:00:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:50.890 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:50.890 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:50.890 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:50.890 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:50.891 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:50.891 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.891 19:00:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:50.891 Found net devices under 0000:09:00.0: cvl_0_0 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:50.891 Found net devices under 0000:09:00.1: cvl_0_1 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:50.891 19:00:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:50.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:09:50.891 00:09:50.891 --- 10.0.0.2 ping statistics --- 00:09:50.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.891 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:09:50.891 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:09:50.891 00:09:50.891 --- 10.0.0.1 ping statistics --- 00:09:50.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.892 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:09:50.892 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.892 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:50.892 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:50.892 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.892 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:50.892 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:50.892 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.892 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:50.892 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:51.149 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:51.149 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:51.149 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:51.149 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.149 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=814188 00:09:51.149 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:51.149 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 814188 00:09:51.150 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 814188 ']' 00:09:51.150 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.150 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.150 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.150 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.150 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.150 [2024-07-25 19:00:43.432066] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:51.150 [2024-07-25 19:00:43.432193] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.150 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.150 [2024-07-25 19:00:43.506531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.406 [2024-07-25 19:00:43.623052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:51.406 [2024-07-25 19:00:43.623116] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.406 [2024-07-25 19:00:43.623134] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.406 [2024-07-25 19:00:43.623147] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.406 [2024-07-25 19:00:43.623168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.406 [2024-07-25 19:00:43.623227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.406 [2024-07-25 19:00:43.623298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.406 [2024-07-25 19:00:43.623398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.406 [2024-07-25 19:00:43.623401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.971 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.971 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:51.971 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.971 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:51.971 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.971 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.971 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:51.971 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.971 
19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.971 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.971 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:51.971 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.971 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.230 [2024-07-25 19:00:44.487884] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.230 Malloc0 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:52.230 
19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.230 [2024-07-25 19:00:44.549546] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=814352 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=814354 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:52.230 { 00:09:52.230 "params": { 00:09:52.230 "name": "Nvme$subsystem", 00:09:52.230 "trtype": "$TEST_TRANSPORT", 00:09:52.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.230 "adrfam": "ipv4", 00:09:52.230 "trsvcid": "$NVMF_PORT", 00:09:52.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.230 "hdgst": ${hdgst:-false}, 00:09:52.230 "ddgst": ${ddgst:-false} 00:09:52.230 }, 00:09:52.230 "method": "bdev_nvme_attach_controller" 00:09:52.230 } 00:09:52.230 EOF 00:09:52.230 )") 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=814356 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:52.230 19:00:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:52.230 { 00:09:52.230 "params": { 00:09:52.230 "name": "Nvme$subsystem", 00:09:52.230 "trtype": "$TEST_TRANSPORT", 00:09:52.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.230 "adrfam": "ipv4", 00:09:52.230 "trsvcid": "$NVMF_PORT", 00:09:52.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.230 "hdgst": ${hdgst:-false}, 00:09:52.230 "ddgst": ${ddgst:-false} 00:09:52.230 }, 00:09:52.230 "method": "bdev_nvme_attach_controller" 00:09:52.230 } 00:09:52.230 EOF 00:09:52.230 )") 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=814359 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:52.230 { 00:09:52.230 "params": { 00:09:52.230 "name": "Nvme$subsystem", 00:09:52.230 "trtype": "$TEST_TRANSPORT", 00:09:52.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.230 "adrfam": "ipv4", 00:09:52.230 
"trsvcid": "$NVMF_PORT", 00:09:52.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.230 "hdgst": ${hdgst:-false}, 00:09:52.230 "ddgst": ${ddgst:-false} 00:09:52.230 }, 00:09:52.230 "method": "bdev_nvme_attach_controller" 00:09:52.230 } 00:09:52.230 EOF 00:09:52.230 )") 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:52.230 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:52.231 { 00:09:52.231 "params": { 00:09:52.231 "name": "Nvme$subsystem", 00:09:52.231 "trtype": "$TEST_TRANSPORT", 00:09:52.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.231 "adrfam": "ipv4", 00:09:52.231 "trsvcid": "$NVMF_PORT", 00:09:52.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.231 "hdgst": ${hdgst:-false}, 00:09:52.231 "ddgst": ${ddgst:-false} 00:09:52.231 }, 00:09:52.231 "method": "bdev_nvme_attach_controller" 00:09:52.231 } 00:09:52.231 EOF 00:09:52.231 )") 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@37 -- # wait 814352 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:52.231 "params": { 00:09:52.231 "name": "Nvme1", 00:09:52.231 "trtype": "tcp", 00:09:52.231 "traddr": "10.0.0.2", 00:09:52.231 "adrfam": "ipv4", 00:09:52.231 "trsvcid": "4420", 00:09:52.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:52.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:52.231 "hdgst": false, 00:09:52.231 "ddgst": false 00:09:52.231 }, 00:09:52.231 "method": "bdev_nvme_attach_controller" 00:09:52.231 }' 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
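The trace shows bdev_io_wait.sh launching four bdevperf instances in the background, recording their PIDs (`WRITE_PID=814352`, `READ_PID=814354`, `FLUSH_PID=814356`, `UNMAP_PID=814359`), and reaping each with `wait <pid>` — `wait 814352` above, the remaining three further down. A minimal sketch of that launch-and-reap pattern, with trivial placeholder jobs standing in for the bdevperf runs:

```shell
#!/usr/bin/env bash
# Launch background jobs (placeholders for the four bdevperf workloads),
# remember each PID via $!, then reap them individually with wait,
# mirroring the WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID handling.
results=$(mktemp -d)

( sleep 0.1; echo write > "$results/write" ) & WRITE_PID=$!
( sleep 0.1; echo read  > "$results/read"  ) & READ_PID=$!
( sleep 0.1; echo flush > "$results/flush" ) & FLUSH_PID=$!
( sleep 0.1; echo unmap > "$results/unmap" ) & UNMAP_PID=$!

wait "$WRITE_PID"
wait "$READ_PID"
wait "$FLUSH_PID"
wait "$UNMAP_PID"

cat "$results/write" "$results/read"
```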
00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:52.231 "params": { 00:09:52.231 "name": "Nvme1", 00:09:52.231 "trtype": "tcp", 00:09:52.231 "traddr": "10.0.0.2", 00:09:52.231 "adrfam": "ipv4", 00:09:52.231 "trsvcid": "4420", 00:09:52.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:52.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:52.231 "hdgst": false, 00:09:52.231 "ddgst": false 00:09:52.231 }, 00:09:52.231 "method": "bdev_nvme_attach_controller" 00:09:52.231 }' 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:52.231 "params": { 00:09:52.231 "name": "Nvme1", 00:09:52.231 "trtype": "tcp", 00:09:52.231 "traddr": "10.0.0.2", 00:09:52.231 "adrfam": "ipv4", 00:09:52.231 "trsvcid": "4420", 00:09:52.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:52.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:52.231 "hdgst": false, 00:09:52.231 "ddgst": false 00:09:52.231 }, 00:09:52.231 "method": "bdev_nvme_attach_controller" 00:09:52.231 }' 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:52.231 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:52.231 "params": { 00:09:52.231 "name": "Nvme1", 00:09:52.231 "trtype": "tcp", 00:09:52.231 "traddr": "10.0.0.2", 00:09:52.231 "adrfam": "ipv4", 00:09:52.231 "trsvcid": "4420", 00:09:52.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:52.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:52.231 "hdgst": false, 00:09:52.231 "ddgst": false 00:09:52.231 }, 00:09:52.231 "method": "bdev_nvme_attach_controller" 00:09:52.231 }' 00:09:52.231 [2024-07-25 19:00:44.595980] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
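The `gen_nvmf_target_json` calls traced above build each connection stanza from a heredoc (variables substituted at `cat` time), collect the stanzas in an array, and join them with `IFS=,` before rendering — which yields the expanded `bdev_nvme_attach_controller` JSON printed here. A simplified, self-contained sketch of that heredoc-plus-join pattern; the variable values are the ones visible in the log, and the stanza is trimmed relative to the real helper in nvmf/common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern: heredoc -> array -> IFS join.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
  # Unquoted EOF: shell variables are expanded inside the heredoc.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# Join multiple stanzas with commas (the IFS=, step before jq in the log).
IFS=, json="${config[*]}"
echo "$json"
```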
00:09:52.231 [2024-07-25 19:00:44.595980] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:52.231 [2024-07-25 19:00:44.596072] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:52.231 [2024-07-25 19:00:44.596072] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:52.231 [2024-07-25 19:00:44.597332] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:52.231 [2024-07-25 19:00:44.597338] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:52.231 [2024-07-25 19:00:44.597406] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:52.231 [2024-07-25 19:00:44.597407] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:52.231 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.489 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.489 [2024-07-25 19:00:44.755325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.489 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.489 [2024-07-25 19:00:44.848906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:52.489 [2024-07-25 19:00:44.859290] app.c: 909:spdk_app_start: *NOTICE*: Total cores
available: 1 00:09:52.489 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.489 [2024-07-25 19:00:44.959811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.747 [2024-07-25 19:00:44.961750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:52.747 [2024-07-25 19:00:45.038191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.747 [2024-07-25 19:00:45.064927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:52.747 [2024-07-25 19:00:45.136884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:52.747 Running I/O for 1 seconds... 00:09:53.005 Running I/O for 1 seconds... 00:09:53.005 Running I/O for 1 seconds... 00:09:53.005 Running I/O for 1 seconds... 00:09:53.940 00:09:53.940 Latency(us) 00:09:53.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.940 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:53.940 Nvme1n1 : 1.02 6619.45 25.86 0.00 0.00 19134.99 9077.95 32234.00 00:09:53.940 =================================================================================================================== 00:09:53.940 Total : 6619.45 25.86 0.00 0.00 19134.99 9077.95 32234.00 00:09:53.940 00:09:53.940 Latency(us) 00:09:53.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.940 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:53.940 Nvme1n1 : 1.01 6534.76 25.53 0.00 0.00 19527.13 5534.15 38059.43 00:09:53.940 =================================================================================================================== 00:09:53.940 Total : 6534.76 25.53 0.00 0.00 19527.13 5534.15 38059.43 00:09:53.940 00:09:53.940 Latency(us) 00:09:53.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.940 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:53.940 Nvme1n1 : 1.01 9933.97 38.80 0.00 
0.00 12844.71 5509.88 25631.86 00:09:53.940 =================================================================================================================== 00:09:53.940 Total : 9933.97 38.80 0.00 0.00 12844.71 5509.88 25631.86 00:09:54.198 00:09:54.198 Latency(us) 00:09:54.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.198 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:54.198 Nvme1n1 : 1.00 197686.46 772.21 0.00 0.00 644.78 324.65 788.86 00:09:54.198 =================================================================================================================== 00:09:54.198 Total : 197686.46 772.21 0.00 0.00 644.78 324.65 788.86 00:09:54.455 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 814354 00:09:54.455 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 814356 00:09:54.455 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 814359 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 
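In the bdevperf tables above, the MiB/s column follows from IOPS and the 4096-byte IO size: MiB/s = IOPS × 4096 / 1048576. A quick awk cross-check against the write row (9933.97 IOPS → 38.80 MiB/s); the numbers are read off the log, and the formula is the only assumption:

```shell
#!/usr/bin/env bash
# Cross-check IOPS vs MiB/s for a 4096-byte IO size (write workload row).
mibs=$(awk 'BEGIN { printf "%.2f", 9933.97 * 4096 / 1048576 }')
echo "$mibs"   # 38.80, matching the table's MiB/s column
```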
00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:54.456 rmmod nvme_tcp 00:09:54.456 rmmod nvme_fabrics 00:09:54.456 rmmod nvme_keyring 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 814188 ']' 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 814188 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 814188 ']' 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 814188 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 814188 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.456 19:00:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 814188' 00:09:54.456 killing process with pid 814188 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 814188 00:09:54.456 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 814188 00:09:54.713 19:00:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:54.713 19:00:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:54.713 19:00:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:54.713 19:00:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:54.713 19:00:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:54.713 19:00:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.713 19:00:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.713 19:00:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.248 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:57.248 00:09:57.248 real 0m8.392s 00:09:57.248 user 0m20.048s 00:09:57.248 sys 0m3.903s 00:09:57.248 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.248 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.248 ************************************ 00:09:57.248 END TEST nvmf_bdev_io_wait 00:09:57.248 ************************************ 00:09:57.248 19:00:49 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:57.248 19:00:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:57.248 19:00:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.248 19:00:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.248 ************************************ 00:09:57.248 START TEST nvmf_queue_depth 00:09:57.248 ************************************ 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:57.249 * Looking for test storage... 00:09:57.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.249 19:00:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:57.249 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:59.784 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:59.784 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.784 19:00:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:59.784 Found net devices under 0000:09:00.0: cvl_0_0 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:59.784 Found net devices under 0000:09:00.1: cvl_0_1 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.784 19:00:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:59.784 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.785 
19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:59.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:09:59.785 00:09:59.785 --- 10.0.0.2 ping statistics --- 00:09:59.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.785 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:09:59.785 00:09:59.785 --- 10.0.0.1 ping statistics --- 00:09:59.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.785 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=816868 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 816868 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 816868 ']' 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.785 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:59.785 [2024-07-25 19:00:51.878961] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:59.785 [2024-07-25 19:00:51.879052] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.785 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.785 [2024-07-25 19:00:51.952912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.785 [2024-07-25 19:00:52.065093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.785 [2024-07-25 19:00:52.065164] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:59.785 [2024-07-25 19:00:52.065193] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.785 [2024-07-25 19:00:52.065205] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.785 [2024-07-25 19:00:52.065215] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.785 [2024-07-25 19:00:52.065239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.785 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:59.785 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:59.785 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.785 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:59.785 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:59.785 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.785 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.785 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.785 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:59.785 [2024-07-25 19:00:52.213010] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.785 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.785 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:59.785 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.785 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.044 Malloc0 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.044 [2024-07-25 19:00:52.282326] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.044 19:00:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=817011 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 817011 /var/tmp/bdevperf.sock 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 817011 ']' 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:00.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.044 19:00:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.044 [2024-07-25 19:00:52.327569] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:00.044 [2024-07-25 19:00:52.327640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid817011 ] 00:10:00.044 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.044 [2024-07-25 19:00:52.400605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.302 [2024-07-25 19:00:52.515570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.868 19:00:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.869 19:00:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:00.869 19:00:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:00.869 19:00:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.869 19:00:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.126 NVMe0n1 00:10:01.126 19:00:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.126 19:00:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:01.384 Running I/O for 10 seconds... 
00:10:11.353 00:10:11.353 Latency(us) 00:10:11.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.353 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:11.353 Verification LBA range: start 0x0 length 0x4000 00:10:11.353 NVMe0n1 : 10.10 8402.94 32.82 0.00 0.00 121389.68 21554.06 73788.68 00:10:11.353 =================================================================================================================== 00:10:11.353 Total : 8402.94 32.82 0.00 0.00 121389.68 21554.06 73788.68 00:10:11.353 0 00:10:11.353 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 817011 00:10:11.353 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 817011 ']' 00:10:11.353 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 817011 00:10:11.353 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:11.353 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.353 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 817011 00:10:11.611 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.611 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.611 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 817011' 00:10:11.611 killing process with pid 817011 00:10:11.611 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 817011 00:10:11.611 Received shutdown signal, test time was about 10.000000 seconds 00:10:11.611 00:10:11.611 Latency(us) 00:10:11.611 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.611 =================================================================================================================== 00:10:11.611 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:11.611 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 817011 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.869 rmmod nvme_tcp 00:10:11.869 rmmod nvme_fabrics 00:10:11.869 rmmod nvme_keyring 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 816868 ']' 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 816868 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 816868 ']' 00:10:11.869 19:01:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 816868 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 816868 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 816868' 00:10:11.869 killing process with pid 816868 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 816868 00:10:11.869 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 816868 00:10:12.128 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:12.128 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:12.128 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:12.128 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:12.128 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:12.128 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.128 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.128 
19:01:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.665 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:14.665 00:10:14.665 real 0m17.383s 00:10:14.665 user 0m24.654s 00:10:14.665 sys 0m3.403s 00:10:14.665 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.665 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.665 ************************************ 00:10:14.665 END TEST nvmf_queue_depth 00:10:14.665 ************************************ 00:10:14.665 19:01:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:14.665 19:01:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:14.665 19:01:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.665 19:01:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.665 ************************************ 00:10:14.665 START TEST nvmf_target_multipath 00:10:14.665 ************************************ 00:10:14.665 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:14.665 * Looking for test storage... 
00:10:14.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.665 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.665 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:14.665 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.665 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.665 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:14.666 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:16.599 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 
00:10:16.600 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:16.600 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:16.600 Found net devices under 0000:09:00.0: cvl_0_0 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:16.600 Found net devices under 0000:09:00.1: cvl_0_1 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.600 19:01:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.600 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:16.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:16.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:10:16.859 00:10:16.859 --- 10.0.0.2 ping statistics --- 00:10:16.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.859 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:10:16.859 00:10:16.859 --- 10.0.0.1 ping statistics --- 00:10:16.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.859 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:16.859 19:01:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:16.859 only one NIC for nvmf test 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:16.859 rmmod nvme_tcp 00:10:16.859 rmmod nvme_fabrics 00:10:16.859 rmmod nvme_keyring 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:16.859 19:01:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.859 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:19.398 19:01:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:19.398 00:10:19.398 real 0m4.655s 00:10:19.398 user 0m0.938s 00:10:19.398 sys 0m1.731s 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:19.398 ************************************ 00:10:19.398 END TEST nvmf_target_multipath 00:10:19.398 ************************************ 00:10:19.398 19:01:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.399 
19:01:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:19.399 ************************************ 00:10:19.399 START TEST nvmf_zcopy 00:10:19.399 ************************************ 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:19.399 * Looking for test storage... 00:10:19.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:19.399 19:01:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:10:19.399 19:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.930 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.930 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:21.930 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:21.930 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:21.930 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:21.930 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:21.930 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:21.930 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:21.930 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:21.930 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:10:21.930 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.931 19:01:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:21.931 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:21.931 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.931 19:01:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:21.931 Found net devices under 0000:09:00.0: cvl_0_0 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.931 
19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:21.931 Found net devices under 0000:09:00.1: cvl_0_1 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.931 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:21.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:21.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:10:21.931 00:10:21.931 --- 10.0.0.2 ping statistics --- 00:10:21.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.931 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:10:21.931 00:10:21.931 --- 10.0.0.1 ping statistics --- 00:10:21.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.931 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=822916 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 822916 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 822916 ']' 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.931 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.932 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.932 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.932 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.932 [2024-07-25 19:01:14.186670] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:21.932 [2024-07-25 19:01:14.186748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.932 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.932 [2024-07-25 19:01:14.259135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.932 [2024-07-25 19:01:14.378749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.932 [2024-07-25 19:01:14.378817] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.932 [2024-07-25 19:01:14.378833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.932 [2024-07-25 19:01:14.378847] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.932 [2024-07-25 19:01:14.378859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:21.932 [2024-07-25 19:01:14.378895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.190 [2024-07-25 19:01:14.538083] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.190 [2024-07-25 19:01:14.554301] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.190 malloc0 00:10:22.190 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.191 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:22.191 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.191 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.191 19:01:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.191 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:22.191 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:22.191 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:22.191 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:22.191 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:22.191 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:22.191 { 00:10:22.191 "params": { 00:10:22.191 "name": "Nvme$subsystem", 00:10:22.191 "trtype": "$TEST_TRANSPORT", 00:10:22.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:22.191 "adrfam": "ipv4", 00:10:22.191 "trsvcid": "$NVMF_PORT", 00:10:22.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:22.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:22.191 "hdgst": ${hdgst:-false}, 00:10:22.191 "ddgst": ${ddgst:-false} 00:10:22.191 }, 00:10:22.191 "method": "bdev_nvme_attach_controller" 00:10:22.191 } 00:10:22.191 EOF 00:10:22.191 )") 00:10:22.191 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:22.191 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:10:22.191 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:22.191 19:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:22.191 "params": { 00:10:22.191 "name": "Nvme1", 00:10:22.191 "trtype": "tcp", 00:10:22.191 "traddr": "10.0.0.2", 00:10:22.191 "adrfam": "ipv4", 00:10:22.191 "trsvcid": "4420", 00:10:22.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:22.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:22.191 "hdgst": false, 00:10:22.191 "ddgst": false 00:10:22.191 }, 00:10:22.191 "method": "bdev_nvme_attach_controller" 00:10:22.191 }' 00:10:22.191 [2024-07-25 19:01:14.653150] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:22.191 [2024-07-25 19:01:14.653229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822938 ] 00:10:22.449 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.449 [2024-07-25 19:01:14.733686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.449 [2024-07-25 19:01:14.854987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.015 Running I/O for 10 seconds... 
00:10:32.982
00:10:32.982 Latency(us)
00:10:32.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:32.982 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:32.982 Verification LBA range: start 0x0 length 0x1000
00:10:32.982 Nvme1n1 : 10.02 5834.91 45.59 0.00 0.00 21875.17 3616.62 29321.29
00:10:32.982 ===================================================================================================================
00:10:32.982 Total : 5834.91 45.59 0.00 0.00 21875.17 3616.62 29321.29
00:10:33.240 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=824265
00:10:33.240 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:10:33.240 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:33.240 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:33.240 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:33.240 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:10:33.240 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:10:33.240 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:10:33.240 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:33.240 { 00:10:33.240 "params": { 00:10:33.240 "name": "Nvme$subsystem", 00:10:33.240 "trtype": "$TEST_TRANSPORT", 00:10:33.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:33.240 "adrfam": "ipv4", 00:10:33.240 "trsvcid": "$NVMF_PORT", 00:10:33.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:33.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:33.240 "hdgst": 
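The throughput column in the verify run's result table can be cross-checked by hand: bdevperf reports MiB/s as IOPS times the I/O size divided by 2^20. A quick arithmetic check against the figures logged above (5834.91 IOPS at 8192-byte I/Os):

```shell
#!/usr/bin/env bash
# Sanity-check bdevperf's MiB/s column: IOPS * IO size / 2^20.
# Figures taken from the Nvme1n1 verify-run results in this log.
iops=5834.91
io_size=8192
awk -v iops="$iops" -v sz="$io_size" \
  'BEGIN { printf "%.2f MiB/s\n", iops * sz / 1048576 }'
# -> 45.59 MiB/s, matching the reported column
```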
${hdgst:-false}, 00:10:33.240 "ddgst": ${ddgst:-false} 00:10:33.240 }, 00:10:33.240 "method": "bdev_nvme_attach_controller" 00:10:33.240 } 00:10:33.240 EOF 00:10:33.240 )") 00:10:33.240 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:33.240 [2024-07-25 19:01:25.545194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.545239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:33.240 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:33.240 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:33.240 "params": { 00:10:33.240 "name": "Nvme1", 00:10:33.240 "trtype": "tcp", 00:10:33.240 "traddr": "10.0.0.2", 00:10:33.240 "adrfam": "ipv4", 00:10:33.240 "trsvcid": "4420", 00:10:33.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:33.240 "hdgst": false, 00:10:33.240 "ddgst": false 00:10:33.240 }, 00:10:33.240 "method": "bdev_nvme_attach_controller" 00:10:33.240 }' 00:10:33.240 [2024-07-25 19:01:25.553137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.553178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.561167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.561190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.569183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.569206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.577185] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.577206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.583294] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:33.240 [2024-07-25 19:01:25.583366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824265 ] 00:10:33.240 [2024-07-25 19:01:25.585225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.585249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.593230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.593252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.601247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.601268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.609268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.609289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.617290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.240 [2024-07-25 19:01:25.617311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.625316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.625337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.633339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.633361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.641358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.641392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.649393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.649413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.656258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.240 [2024-07-25 19:01:25.657423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.657448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.665487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.665527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.673493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.673527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.681493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.681516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.689509] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.689530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.697530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.697555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.240 [2024-07-25 19:01:25.705551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.240 [2024-07-25 19:01:25.705576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.713582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.713623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.721622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.721658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.729669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.729711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.737639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.737664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.745662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.745687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.753683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.753708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.761706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.761730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.769730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.769755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.776079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.499 [2024-07-25 19:01:25.777750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.777774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.785775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.785801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.793823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.793860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.801851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.801891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.809874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.809915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 
19:01:25.817894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.817932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.825918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.825959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.833940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.833980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.841933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.841958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.849971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.850005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.858006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.858044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.866029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.866069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.874020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.874044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.882045] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.882070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.890062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.890086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.499 [2024-07-25 19:01:25.898131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.499 [2024-07-25 19:01:25.898164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.500 [2024-07-25 19:01:25.906125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.500 [2024-07-25 19:01:25.906163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.500 [2024-07-25 19:01:25.914163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.500 [2024-07-25 19:01:25.914203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.500 [2024-07-25 19:01:25.922185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.500 [2024-07-25 19:01:25.922207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.500 [2024-07-25 19:01:25.930203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.500 [2024-07-25 19:01:25.930226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.500 [2024-07-25 19:01:25.938221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.500 [2024-07-25 19:01:25.938244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.500 [2024-07-25 19:01:25.946241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:33.500 [2024-07-25 19:01:25.946263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.500 [2024-07-25 19:01:25.954264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.500 [2024-07-25 19:01:25.954286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.500 [2024-07-25 19:01:25.962302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.500 [2024-07-25 19:01:25.962327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:25.970336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:25.970361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:25.978347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:25.978391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:25.986366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:25.986402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:25.995231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:25.995258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.002438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.002465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 Running I/O for 5 seconds... 
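Each namespace-add attempt in the run above produces a matched pair of log lines: `spdk_nvmf_subsystem_add_ns_ext` rejects the already-claimed NSID, then `nvmf_rpc_ns_paused` reports the RPC failure. When triaging a captured log it can help to confirm the two counts balance; a small sketch with the log text inlined for illustration (a real check would read the captured log file instead):

```shell
#!/usr/bin/env bash
# Verify that every 'Requested NSID ... already in use' error is paired
# with one 'Unable to add namespace' error. Two sample pairs inlined.
log='subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace'
added=$(printf '%s\n' "$log" | grep -c 'already in use')
failed=$(printf '%s\n' "$log" | grep -c 'Unable to add namespace')
echo "add attempts: $added, failures: $failed"
[ "$added" -eq "$failed" ] && echo "pairs balance"
```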
00:10:33.758 [2024-07-25 19:01:26.010461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.010487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.027951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.027983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.039665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.039696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.051332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.051359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.063003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.063034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.074771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.074808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.086744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.086776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.098710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.098741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.110222] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.110249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.121337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.121364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.132936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.132967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.144398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.144442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.155597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.155628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.166887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.166917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.178260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.178287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.189960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.758 [2024-07-25 19:01:26.189991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.758 [2024-07-25 19:01:26.201274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:33.759 [2024-07-25 19:01:26.201302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.759 [2024-07-25 19:01:26.212361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.759 [2024-07-25 19:01:26.212387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.759 [2024-07-25 19:01:26.223356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.759 [2024-07-25 19:01:26.223383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.017 [2024-07-25 19:01:26.235054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.017 [2024-07-25 19:01:26.235084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.017 [2024-07-25 19:01:26.246856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.017 [2024-07-25 19:01:26.246886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.017 [2024-07-25 19:01:26.258569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.017 [2024-07-25 19:01:26.258600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.017 [2024-07-25 19:01:26.270551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.017 [2024-07-25 19:01:26.270582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.017 [2024-07-25 19:01:26.282533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.017 [2024-07-25 19:01:26.282563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.017 [2024-07-25 19:01:26.293882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.017 
[2024-07-25 19:01:26.293924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.017 [2024-07-25 19:01:26.305099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.017 [2024-07-25 19:01:26.305151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.017 [2024-07-25 19:01:26.316723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.017 [2024-07-25 19:01:26.316753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.017 [2024-07-25 19:01:26.328240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.017 [2024-07-25 19:01:26.328267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.017 [2024-07-25 19:01:26.339522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.017 [2024-07-25 19:01:26.339552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.017 [2024-07-25 19:01:26.350903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.017 [2024-07-25 19:01:26.350932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.017 [2024-07-25 19:01:26.362293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.017 [2024-07-25 19:01:26.362320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.017 [2024-07-25 19:01:26.373711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.017 [2024-07-25 19:01:26.373740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.017 [2024-07-25 19:01:26.384962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.018 [2024-07-25 19:01:26.384991] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.018 [2024-07-25 19:01:26.396157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.018 [2024-07-25 19:01:26.396183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.018 [2024-07-25 19:01:26.407743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.018 [2024-07-25 19:01:26.407773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.018 [2024-07-25 19:01:26.418895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.018 [2024-07-25 19:01:26.418924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.018 [2024-07-25 19:01:26.430047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.018 [2024-07-25 19:01:26.430077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.018 [2024-07-25 19:01:26.441047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.018 [2024-07-25 19:01:26.441076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.018 [2024-07-25 19:01:26.452258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.018 [2024-07-25 19:01:26.452284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.018 [2024-07-25 19:01:26.463462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.018 [2024-07-25 19:01:26.463492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.018 [2024-07-25 19:01:26.474801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.018 [2024-07-25 19:01:26.474831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:34.018 [2024-07-25 19:01:26.486505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.018 [2024-07-25 19:01:26.486535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.498114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.498158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.511224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.511251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.521242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.521269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.533436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.533466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.545126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.545168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.556769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.556798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.567766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.567796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.579125] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.579168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.590342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.590369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.601651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.601680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.612692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.612722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.623712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.623741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.634642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.634672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.646004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.646034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.657738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.657767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.669357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.669384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.682841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.682872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.693453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.693483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.704627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.704657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.717321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.717349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.726656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.726687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.277 [2024-07-25 19:01:26.738860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.277 [2024-07-25 19:01:26.738891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.536 [2024-07-25 19:01:26.750313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.536 [2024-07-25 19:01:26.750341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.536 [2024-07-25 19:01:26.761903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.536 
[2024-07-25 19:01:26.761933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.536 [2024-07-25 19:01:26.772976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.536 [2024-07-25 19:01:26.773006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.536 [2024-07-25 19:01:26.784187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.536 [2024-07-25 19:01:26.784214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.536 [2024-07-25 19:01:26.795360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.536 [2024-07-25 19:01:26.795387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.536 [2024-07-25 19:01:26.806547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.536 [2024-07-25 19:01:26.806577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.536 [2024-07-25 19:01:26.817748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.536 [2024-07-25 19:01:26.817778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.536 [2024-07-25 19:01:26.830754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.536 [2024-07-25 19:01:26.830784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.536 [2024-07-25 19:01:26.841342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.536 [2024-07-25 19:01:26.841369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.536 [2024-07-25 19:01:26.853398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.536 [2024-07-25 19:01:26.853428] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.536 [2024-07-25 19:01:26.864678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.536 [2024-07-25 19:01:26.864708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.536 [2024-07-25 19:01:26.877674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.536 [2024-07-25 19:01:26.877704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.536 [2024-07-25 19:01:26.888386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.537 [2024-07-25 19:01:26.888413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.537 [2024-07-25 19:01:26.900096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.537 [2024-07-25 19:01:26.900159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.537 [2024-07-25 19:01:26.911465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.537 [2024-07-25 19:01:26.911495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.537 [2024-07-25 19:01:26.922867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.537 [2024-07-25 19:01:26.922897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.537 [2024-07-25 19:01:26.933901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.537 [2024-07-25 19:01:26.933931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.537 [2024-07-25 19:01:26.947064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.537 [2024-07-25 19:01:26.947095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:34.537 [2024-07-25 19:01:26.957847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.537 [2024-07-25 19:01:26.957877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.537 [2024-07-25 19:01:26.969334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.537 [2024-07-25 19:01:26.969362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.537 [2024-07-25 19:01:26.980737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.537 [2024-07-25 19:01:26.980771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.537 [2024-07-25 19:01:26.992427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.537 [2024-07-25 19:01:26.992454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.537 [2024-07-25 19:01:27.004100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.537 [2024-07-25 19:01:27.004163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.795 [2024-07-25 19:01:27.017408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.795 [2024-07-25 19:01:27.017438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.795 [2024-07-25 19:01:27.027963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.795 [2024-07-25 19:01:27.027993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.795 [2024-07-25 19:01:27.039730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.795 [2024-07-25 19:01:27.039760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.795 [2024-07-25 19:01:27.051536] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.795 [2024-07-25 19:01:27.051567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.795 [2024-07-25 19:01:27.063054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.795 [2024-07-25 19:01:27.063085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.795 [2024-07-25 19:01:27.074589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.795 [2024-07-25 19:01:27.074619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.795 [2024-07-25 19:01:27.085769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.795 [2024-07-25 19:01:27.085799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.795 [2024-07-25 19:01:27.097382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.795 [2024-07-25 19:01:27.097411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.795 [2024-07-25 19:01:27.108411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.795 [2024-07-25 19:01:27.108454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.795 [2024-07-25 19:01:27.119390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.795 [2024-07-25 19:01:27.119434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.795 [2024-07-25 19:01:27.132658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.795 [2024-07-25 19:01:27.132688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.795 [2024-07-25 19:01:27.143615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:34.795 [2024-07-25 19:01:27.143645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.795 [2024-07-25 19:01:27.155233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.795 [2024-07-25 19:01:27.155260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.795 [2024-07-25 19:01:27.166473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.795 [2024-07-25 19:01:27.166503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.795 [2024-07-25 19:01:27.177974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.796 [2024-07-25 19:01:27.178004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.796 [2024-07-25 19:01:27.189638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.796 [2024-07-25 19:01:27.189669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.796 [2024-07-25 19:01:27.200677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.796 [2024-07-25 19:01:27.200707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.796 [2024-07-25 19:01:27.212007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.796 [2024-07-25 19:01:27.212037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.796 [2024-07-25 19:01:27.223790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.796 [2024-07-25 19:01:27.223820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.796 [2024-07-25 19:01:27.235653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.796 
[2024-07-25 19:01:27.235684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.796 [2024-07-25 19:01:27.247458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.796 [2024-07-25 19:01:27.247489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.796 [2024-07-25 19:01:27.258616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.796 [2024-07-25 19:01:27.258647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.270245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.270273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.281895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.281926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.293282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.293310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.304824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.304854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.316280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.316307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.327495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.327526] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.338694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.338724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.350050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.350080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.361248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.361275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.373042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.373078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.384341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.384368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.395710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.395739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.406999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.407029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.418058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.418087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:35.054 [2024-07-25 19:01:27.429263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.429290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.442529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.442559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.453315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.453342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.465047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.465078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.476796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.476825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.488111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.488137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.054 [2024-07-25 19:01:27.501268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.054 [2024-07-25 19:01:27.501295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.055 [2024-07-25 19:01:27.511586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.055 [2024-07-25 19:01:27.511616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.055 [2024-07-25 19:01:27.523173] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.055 [2024-07-25 19:01:27.523199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.534557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.534587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.545867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.545896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.557461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.557490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.569113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.569142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.580376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.580405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.591552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.591590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.603092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.603129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.614458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.614489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.625823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.625853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.637157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.637184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.648869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.648899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.659666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.659696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.672420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.672450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.682784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.682814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.695029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.695059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.706280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 
[2024-07-25 19:01:27.706307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.717697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.717727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.729234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.729260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.740577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.740607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.754170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.754197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.765111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.765140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.313 [2024-07-25 19:01:27.776286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.313 [2024-07-25 19:01:27.776314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.787911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.787941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.799552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.799582] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.812904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.812941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.823361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.823387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.835249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.835277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.847161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.847188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.858576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.858606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.869686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.869716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.881083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.881124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.892670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.892700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:35.572 [2024-07-25 19:01:27.906090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.906129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.916684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.916714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.927784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.927814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.940671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.940701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.951014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.951045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.963001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.963032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.974880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.974909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.986403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.986449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:27.999209] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:27.999236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:28.009718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:28.009748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:28.020703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:28.020733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.572 [2024-07-25 19:01:28.031832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.572 [2024-07-25 19:01:28.031870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.043633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.043663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.055098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.055150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.066227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.066255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.077717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.077747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.089252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.089280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.100346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.100372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.111400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.111442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.122846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.122876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.134119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.134162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.145498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.145528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.156727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.156756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.167942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.167973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.181399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 
[2024-07-25 19:01:28.181429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.191911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.191940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.203899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.203928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.215394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.215421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.228728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.228758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.239631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.239662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.251015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.251045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.262077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.262115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.273178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.273205] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.831 [2024-07-25 19:01:28.284774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.831 [2024-07-25 19:01:28.284806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair — "Requested NSID 1 already in use" from subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext followed by "Unable to add namespace" from nvmf_rpc.c:1553:nvmf_rpc_ns_paused — repeats roughly every 10-12 ms from [2024-07-25 19:01:28.296377] through [2024-07-25 19:01:30.087839] (elapsed 00:10:35.831 to 00:10:37.645); repeated entries omitted ...]
00:10:37.645 [2024-07-25 19:01:30.100833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.645 [2024-07-25 19:01:30.100863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.645 [2024-07-25 19:01:30.111910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.645 [2024-07-25 19:01:30.111940]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.123741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.123771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.135445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.135475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.146638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.146667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.157745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.157776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.169177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.169204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.180185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.180212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.191434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.191463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.202449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.202477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:37.904 [2024-07-25 19:01:30.213496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.213526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.226268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.226295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.236767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.236797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.248176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.248204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.259544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.259575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.270918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.270948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.281873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.281903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.294734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.294764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.304565] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.304595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.316184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.316211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.326933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.326964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.337990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.338020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.349225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.349252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.360816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.360846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.904 [2024-07-25 19:01:30.372248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.904 [2024-07-25 19:01:30.372282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.383908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.383938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.395294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.395322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.406463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.406494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.417780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.417809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.429302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.429329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.440825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.440855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.454183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.454210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.465026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.465056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.476199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.476226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.493818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 
[2024-07-25 19:01:30.493850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.504270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.504297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.516205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.516232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.527481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.527511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.540701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.540730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.551262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.551289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.563454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.563485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.574585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.574615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.585663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.585692] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.596693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.596723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.609736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.609766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.620306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.620333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.163 [2024-07-25 19:01:30.632695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.163 [2024-07-25 19:01:30.632725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.421 [2024-07-25 19:01:30.644306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.421 [2024-07-25 19:01:30.644335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.421 [2024-07-25 19:01:30.656056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.421 [2024-07-25 19:01:30.656088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.421 [2024-07-25 19:01:30.669373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.421 [2024-07-25 19:01:30.669420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.679668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.679698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:38.422 [2024-07-25 19:01:30.691365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.691411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.702680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.702710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.713977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.714008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.725218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.725246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.736763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.736794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.748396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.748427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.761481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.761511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.771554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.771585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.783362] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.783407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.794875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.794905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.805785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.805823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.817555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.817586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.828747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.828776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.840004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.840034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.850969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.851000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.862191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.862218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.875178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.875205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.422 [2024-07-25 19:01:30.885031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.422 [2024-07-25 19:01:30.885060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:30.896810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:30.896841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:30.907757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:30.907785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:30.918293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:30.918320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:30.928417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:30.928444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:30.938616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:30.938643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:30.949155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:30.949182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:30.959090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 
[2024-07-25 19:01:30.959124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:30.969458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:30.969484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:30.979633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:30.979659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:30.989981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:30.990008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:31.000299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:31.000327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:31.010687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:31.010722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:31.021074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:31.021124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:31.028912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:31.028938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 00:10:38.680 Latency(us) 00:10:38.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:38.680 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, 
depth: 128, IO size: 8192)
00:10:38.680 Nvme1n1 : 5.01 11320.34 88.44 0.00 0.00 11292.22 4975.88 19418.07
00:10:38.680 ===================================================================================================================
00:10:38.680 Total : 11320.34 88.44 0.00 0.00 11292.22 4975.88 19418.07
00:10:38.680 [2024-07-25 19:01:31.033206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:31.033234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:31.041295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:31.041322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:31.049308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:31.049330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:31.057384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:31.057429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:31.065406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:31.065449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:31.073429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:31.073473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:31.081449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:31.081494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:10:38.680 [2024-07-25 19:01:31.089473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:31.089520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:31.097525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:31.097574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:31.105516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:31.105559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.680 [2024-07-25 19:01:31.113538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.680 [2024-07-25 19:01:31.113583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.681 [2024-07-25 19:01:31.121561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.681 [2024-07-25 19:01:31.121604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.681 [2024-07-25 19:01:31.129592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.681 [2024-07-25 19:01:31.129637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.681 [2024-07-25 19:01:31.137610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.681 [2024-07-25 19:01:31.137668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.681 [2024-07-25 19:01:31.145625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.681 [2024-07-25 19:01:31.145669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.153661] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.153712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.161675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.161721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.169686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.169730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.177676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.177706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.185674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.185694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.193698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.193720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.201741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.201762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.209756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.209782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.217824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.217869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.225850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.225895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.233824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.233852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.241844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.241871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.249864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.249889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.257886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.257911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.265907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.265931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.274006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.274057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.281987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 
[2024-07-25 19:01:31.282032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.289986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.290024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.297999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.298024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 [2024-07-25 19:01:31.306018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.938 [2024-07-25 19:01:31.306043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (824265) - No such process 00:10:38.938 19:01:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 824265 00:10:38.938 19:01:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.938 19:01:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.938 19:01:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:38.938 19:01:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.938 19:01:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:38.938 19:01:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.938 19:01:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:38.938 delay0 00:10:38.938 19:01:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.939 19:01:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:38.939 19:01:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.939 19:01:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:38.939 19:01:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.939 19:01:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:38.939 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.939 [2024-07-25 19:01:31.391755] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:45.561 Initializing NVMe Controllers 00:10:45.561 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:45.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:45.561 Initialization complete. Launching workers. 
00:10:45.561 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 136 00:10:45.561 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 423, failed to submit 33 00:10:45.561 success 239, unsuccess 184, failed 0 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:45.561 rmmod nvme_tcp 00:10:45.561 rmmod nvme_fabrics 00:10:45.561 rmmod nvme_keyring 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 822916 ']' 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 822916 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 822916 ']' 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 822916 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 822916 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 822916' 00:10:45.561 killing process with pid 822916 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 822916 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 822916 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.561 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.099 19:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:48.099 00:10:48.099 real 0m28.628s 
00:10:48.099 user 0m41.583s 00:10:48.099 sys 0m8.810s 00:10:48.099 19:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.099 19:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.099 ************************************ 00:10:48.099 END TEST nvmf_zcopy 00:10:48.099 ************************************ 00:10:48.099 19:01:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:48.099 19:01:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.099 19:01:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.099 19:01:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:48.099 ************************************ 00:10:48.099 START TEST nvmf_nmic 00:10:48.099 ************************************ 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:48.099 * Looking for test storage... 
00:10:48.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.099 
19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:48.099 19:01:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:48.099 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:50.633 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:50.633 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:50.633 Found net devices under 0000:09:00.0: cvl_0_0 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:50.633 Found net devices under 0000:09:00.1: cvl_0_1 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:50.633 19:01:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.633 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:50.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:50.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:10:50.634 00:10:50.634 --- 10.0.0.2 ping statistics --- 00:10:50.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.634 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:10:50.634 00:10:50.634 --- 10.0.0.1 ping statistics --- 00:10:50.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.634 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=827956 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 827956 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 827956 ']' 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.634 19:01:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.634 [2024-07-25 19:01:42.821880] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:50.634 [2024-07-25 19:01:42.821951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.634 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.634 [2024-07-25 19:01:42.899513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.634 [2024-07-25 19:01:43.017060] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.634 [2024-07-25 19:01:43.017136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.634 [2024-07-25 19:01:43.017155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.634 [2024-07-25 19:01:43.017190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.634 [2024-07-25 19:01:43.017203] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:50.634 [2024-07-25 19:01:43.021131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.634 [2024-07-25 19:01:43.021184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.634 [2024-07-25 19:01:43.021260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.634 [2024-07-25 19:01:43.021264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.567 [2024-07-25 19:01:43.860933] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.567 Malloc0 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.567 [2024-07-25 19:01:43.915284] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:51.567 test case1: single bdev can't be used in multiple subsystems 
00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.567 [2024-07-25 19:01:43.939132] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:51.567 [2024-07-25 19:01:43.939176] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:51.567 [2024-07-25 19:01:43.939191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.567 request: 00:10:51.567 { 00:10:51.567 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:51.567 "namespace": { 00:10:51.567 
"bdev_name": "Malloc0", 00:10:51.567 "no_auto_visible": false 00:10:51.567 }, 00:10:51.567 "method": "nvmf_subsystem_add_ns", 00:10:51.567 "req_id": 1 00:10:51.567 } 00:10:51.567 Got JSON-RPC error response 00:10:51.567 response: 00:10:51.567 { 00:10:51.567 "code": -32602, 00:10:51.567 "message": "Invalid parameters" 00:10:51.567 } 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:51.567 Adding namespace failed - expected result. 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:51.567 test case2: host connect to nvmf target in multiple paths 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.567 [2024-07-25 19:01:43.947272] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.567 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:52.501 19:01:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:53.067 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:53.067 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:53.067 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.067 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:53.067 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:54.964 19:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:54.964 19:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:54.964 19:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.964 19:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:54.964 19:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.964 19:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:54.964 19:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:54.964 [global] 00:10:54.964 thread=1 00:10:54.964 invalidate=1 00:10:54.964 rw=write 00:10:54.964 time_based=1 00:10:54.964 runtime=1 00:10:54.964 ioengine=libaio 00:10:54.964 direct=1 00:10:54.964 bs=4096 00:10:54.964 iodepth=1 00:10:54.964 
norandommap=0 00:10:54.964 numjobs=1 00:10:54.964 00:10:54.964 verify_dump=1 00:10:54.964 verify_backlog=512 00:10:54.964 verify_state_save=0 00:10:54.964 do_verify=1 00:10:54.964 verify=crc32c-intel 00:10:54.964 [job0] 00:10:54.964 filename=/dev/nvme0n1 00:10:54.964 Could not set queue depth (nvme0n1) 00:10:55.221 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.221 fio-3.35 00:10:55.221 Starting 1 thread 00:10:56.594 00:10:56.594 job0: (groupid=0, jobs=1): err= 0: pid=828605: Thu Jul 25 19:01:48 2024 00:10:56.594 read: IOPS=20, BW=81.1KiB/s (83.0kB/s)(84.0KiB/1036msec) 00:10:56.594 slat (nsec): min=10497, max=46814, avg=30123.90, stdev=12665.34 00:10:56.594 clat (usec): min=40930, max=41125, avg=40970.99, stdev=41.74 00:10:56.594 lat (usec): min=40973, max=41135, avg=41001.12, stdev=34.14 00:10:56.594 clat percentiles (usec): 00:10:56.594 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:56.594 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:56.594 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:56.594 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:56.594 | 99.99th=[41157] 00:10:56.594 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:10:56.594 slat (usec): min=7, max=30475, avg=75.27, stdev=1346.17 00:10:56.594 clat (usec): min=172, max=460, avg=262.50, stdev=48.20 00:10:56.594 lat (usec): min=196, max=30794, avg=337.77, stdev=1349.63 00:10:56.594 clat percentiles (usec): 00:10:56.594 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 221], 20.00th=[ 229], 00:10:56.594 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 258], 00:10:56.594 | 70.00th=[ 281], 80.00th=[ 302], 90.00th=[ 330], 95.00th=[ 367], 00:10:56.594 | 99.00th=[ 424], 99.50th=[ 437], 99.90th=[ 461], 99.95th=[ 461], 00:10:56.594 | 99.99th=[ 461] 00:10:56.594 bw ( KiB/s): min= 4096, max= 4096, 
per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:56.594 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:56.594 lat (usec) : 250=54.97%, 500=41.09% 00:10:56.594 lat (msec) : 50=3.94% 00:10:56.594 cpu : usr=0.48%, sys=1.06%, ctx=536, majf=0, minf=2 00:10:56.594 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.594 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.594 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.594 00:10:56.594 Run status group 0 (all jobs): 00:10:56.594 READ: bw=81.1KiB/s (83.0kB/s), 81.1KiB/s-81.1KiB/s (83.0kB/s-83.0kB/s), io=84.0KiB (86.0kB), run=1036-1036msec 00:10:56.594 WRITE: bw=1977KiB/s (2024kB/s), 1977KiB/s-1977KiB/s (2024kB/s-2024kB/s), io=2048KiB (2097kB), run=1036-1036msec 00:10:56.594 00:10:56.594 Disk stats (read/write): 00:10:56.594 nvme0n1: ios=43/512, merge=0/0, ticks=1681/129, in_queue=1810, util=98.70% 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:56.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:56.594 rmmod nvme_tcp 00:10:56.594 rmmod nvme_fabrics 00:10:56.594 rmmod nvme_keyring 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 827956 ']' 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 827956 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 827956 ']' 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 827956 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 827956 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 827956' 00:10:56.594 killing process with pid 827956 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 827956 00:10:56.594 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 827956 00:10:56.853 19:01:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.853 19:01:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:56.853 19:01:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:56.853 19:01:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.853 19:01:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.853 19:01:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.853 19:01:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.853 19:01:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.391 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:59.391 00:10:59.391 real 0m11.279s 00:10:59.391 user 0m26.040s 00:10:59.391 sys 0m2.717s 00:10:59.391 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:59.391 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:59.391 ************************************ 00:10:59.391 END TEST nvmf_nmic 00:10:59.391 ************************************ 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:59.392 ************************************ 00:10:59.392 START TEST nvmf_fio_target 00:10:59.392 ************************************ 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:59.392 * Looking for test storage... 
00:10:59.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.392 19:01:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:59.392 19:01:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:59.392 19:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.921 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:01.922 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:01.922 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:01.922 Found net devices under 0000:09:00.0: cvl_0_0 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:01.922 Found net devices under 0000:09:00.1: cvl_0_1 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:01.922 19:01:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.922 19:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:01.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:11:01.922 00:11:01.922 --- 10.0.0.2 ping statistics --- 00:11:01.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.922 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:01.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:11:01.922 00:11:01.922 --- 10.0.0.1 ping statistics --- 00:11:01.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.922 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=831094 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 831094 00:11:01.922 19:01:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 831094 ']' 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:01.922 19:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.922 [2024-07-25 19:01:54.089986] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:01.922 [2024-07-25 19:01:54.090075] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.922 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.922 [2024-07-25 19:01:54.169234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.922 [2024-07-25 19:01:54.293571] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.922 [2024-07-25 19:01:54.293620] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:01.922 [2024-07-25 19:01:54.293636] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.922 [2024-07-25 19:01:54.293650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.922 [2024-07-25 19:01:54.293661] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.922 [2024-07-25 19:01:54.293736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.922 [2024-07-25 19:01:54.293792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.922 [2024-07-25 19:01:54.293846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.922 [2024-07-25 19:01:54.293843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.857 19:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.857 19:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:02.857 19:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:02.857 19:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:02.857 19:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.857 19:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.857 19:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:02.857 [2024-07-25 19:01:55.324795] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.115 19:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.373 19:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:03.373 19:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.631 19:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:03.631 19:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.888 19:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:03.888 19:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:04.146 19:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:04.146 19:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:04.405 19:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:04.663 19:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:04.663 19:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:04.921 19:01:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:04.921 19:01:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:05.179 19:01:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:05.179 19:01:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:05.437 19:01:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:05.695 19:01:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:05.695 19:01:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:05.952 19:01:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:05.952 19:01:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:06.237 19:01:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.507 [2024-07-25 19:01:58.703677] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.507 19:01:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:06.507 19:01:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:06.765 19:01:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:07.698 19:01:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:07.699 19:01:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:07.699 19:01:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:07.699 19:01:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:07.699 19:01:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:07.699 19:01:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:09.599 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:09.599 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:09.599 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.599 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:09.599 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.599 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:09.599 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:09.599 [global] 00:11:09.599 thread=1 00:11:09.599 invalidate=1 00:11:09.599 rw=write 00:11:09.599 time_based=1 00:11:09.599 runtime=1 00:11:09.599 ioengine=libaio 00:11:09.599 direct=1 00:11:09.599 bs=4096 00:11:09.599 iodepth=1 00:11:09.599 norandommap=0 00:11:09.599 numjobs=1 00:11:09.599 00:11:09.599 verify_dump=1 00:11:09.599 verify_backlog=512 00:11:09.599 verify_state_save=0 00:11:09.599 do_verify=1 00:11:09.599 verify=crc32c-intel 00:11:09.599 [job0] 00:11:09.599 filename=/dev/nvme0n1 00:11:09.599 [job1] 00:11:09.599 filename=/dev/nvme0n2 00:11:09.599 [job2] 00:11:09.599 filename=/dev/nvme0n3 00:11:09.599 [job3] 00:11:09.599 filename=/dev/nvme0n4 00:11:09.599 Could not set queue depth (nvme0n1) 00:11:09.599 Could not set queue depth (nvme0n2) 00:11:09.599 Could not set queue depth (nvme0n3) 00:11:09.599 Could not set queue depth (nvme0n4) 00:11:09.857 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.857 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.857 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.857 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.857 fio-3.35 00:11:09.857 Starting 4 threads 00:11:11.231 00:11:11.231 job0: (groupid=0, jobs=1): err= 0: pid=832181: Thu Jul 25 19:02:03 2024 00:11:11.231 read: IOPS=46, BW=186KiB/s (191kB/s)(192KiB/1030msec) 00:11:11.231 slat (nsec): min=7428, max=34578, avg=22659.58, stdev=8939.90 00:11:11.231 clat (usec): min=305, max=42017, avg=18319.36, stdev=20476.44 00:11:11.231 lat (usec): min=313, max=42033, avg=18342.02, stdev=20483.78 00:11:11.231 clat percentiles (usec): 00:11:11.231 | 1.00th=[ 306], 5.00th=[ 343], 10.00th=[ 359], 20.00th=[ 457], 
00:11:11.231 | 30.00th=[ 478], 40.00th=[ 490], 50.00th=[ 498], 60.00th=[41157], 00:11:11.231 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:11:11.231 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:11.231 | 99.99th=[42206] 00:11:11.231 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:11:11.231 slat (nsec): min=6801, max=37890, avg=10876.21, stdev=4149.92 00:11:11.231 clat (usec): min=191, max=732, avg=277.26, stdev=82.15 00:11:11.231 lat (usec): min=201, max=740, avg=288.14, stdev=83.54 00:11:11.231 clat percentiles (usec): 00:11:11.231 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 223], 00:11:11.231 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:11:11.231 | 70.00th=[ 269], 80.00th=[ 326], 90.00th=[ 400], 95.00th=[ 449], 00:11:11.231 | 99.00th=[ 586], 99.50th=[ 594], 99.90th=[ 734], 99.95th=[ 734], 00:11:11.231 | 99.99th=[ 734] 00:11:11.231 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:11.231 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:11.231 lat (usec) : 250=56.25%, 500=36.43%, 750=3.57% 00:11:11.231 lat (msec) : 50=3.75% 00:11:11.231 cpu : usr=0.39%, sys=0.49%, ctx=562, majf=0, minf=1 00:11:11.231 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.231 issued rwts: total=48,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.231 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.231 job1: (groupid=0, jobs=1): err= 0: pid=832182: Thu Jul 25 19:02:03 2024 00:11:11.231 read: IOPS=20, BW=82.9KiB/s (84.9kB/s)(84.0KiB/1013msec) 00:11:11.231 slat (nsec): min=12676, max=37420, avg=31465.33, stdev=8198.27 00:11:11.231 clat (usec): min=40907, max=42016, avg=41313.30, stdev=476.13 
00:11:11.231 lat (usec): min=40942, max=42052, avg=41344.76, stdev=475.94 00:11:11.231 clat percentiles (usec): 00:11:11.231 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:11.231 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:11.231 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:11.231 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:11.231 | 99.99th=[42206] 00:11:11.231 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:11:11.231 slat (nsec): min=8668, max=42677, avg=13102.22, stdev=4525.01 00:11:11.231 clat (usec): min=202, max=492, avg=265.49, stdev=55.66 00:11:11.231 lat (usec): min=212, max=523, avg=278.59, stdev=56.29 00:11:11.231 clat percentiles (usec): 00:11:11.231 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 229], 00:11:11.231 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:11:11.231 | 70.00th=[ 262], 80.00th=[ 285], 90.00th=[ 355], 95.00th=[ 408], 00:11:11.231 | 99.00th=[ 437], 99.50th=[ 469], 99.90th=[ 494], 99.95th=[ 494], 00:11:11.231 | 99.99th=[ 494] 00:11:11.231 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:11.231 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:11.231 lat (usec) : 250=52.35%, 500=43.71% 00:11:11.231 lat (msec) : 50=3.94% 00:11:11.231 cpu : usr=0.40%, sys=0.89%, ctx=535, majf=0, minf=1 00:11:11.231 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.231 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.231 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.231 job2: (groupid=0, jobs=1): err= 0: pid=832183: Thu Jul 25 19:02:03 2024 00:11:11.231 read: IOPS=22, 
BW=88.5KiB/s (90.6kB/s)(92.0KiB/1040msec) 00:11:11.231 slat (nsec): min=9509, max=34568, avg=28997.26, stdev=9105.20 00:11:11.231 clat (usec): min=490, max=42023, avg=39507.97, stdev=8519.02 00:11:11.231 lat (usec): min=499, max=42039, avg=39536.97, stdev=8523.21 00:11:11.231 clat percentiles (usec): 00:11:11.231 | 1.00th=[ 490], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:11.231 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:11.231 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:11.231 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:11.231 | 99.99th=[42206] 00:11:11.231 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:11:11.231 slat (nsec): min=6742, max=28051, avg=9639.54, stdev=3295.02 00:11:11.231 clat (usec): min=207, max=423, avg=242.19, stdev=20.15 00:11:11.231 lat (usec): min=214, max=435, avg=251.83, stdev=21.08 00:11:11.231 clat percentiles (usec): 00:11:11.231 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 227], 00:11:11.231 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:11:11.231 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 269], 00:11:11.231 | 99.00th=[ 306], 99.50th=[ 371], 99.90th=[ 424], 99.95th=[ 424], 00:11:11.231 | 99.99th=[ 424] 00:11:11.231 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:11.231 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:11.231 lat (usec) : 250=70.28%, 500=25.61% 00:11:11.231 lat (msec) : 50=4.11% 00:11:11.231 cpu : usr=0.19%, sys=0.48%, ctx=536, majf=0, minf=2 00:11:11.231 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.231 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:11:11.231 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.231 job3: (groupid=0, jobs=1): err= 0: pid=832184: Thu Jul 25 19:02:03 2024 00:11:11.231 read: IOPS=20, BW=81.7KiB/s (83.7kB/s)(84.0KiB/1028msec) 00:11:11.231 slat (nsec): min=9822, max=33455, avg=30462.90, stdev=7044.80 00:11:11.231 clat (usec): min=40864, max=42230, avg=41349.52, stdev=514.51 00:11:11.231 lat (usec): min=40897, max=42240, avg=41379.98, stdev=512.03 00:11:11.231 clat percentiles (usec): 00:11:11.231 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:11.231 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:11.231 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:11.231 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:11.231 | 99.99th=[42206] 00:11:11.231 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:11:11.231 slat (nsec): min=6872, max=47138, avg=11170.98, stdev=4997.41 00:11:11.231 clat (usec): min=207, max=1895, avg=295.87, stdev=120.82 00:11:11.231 lat (usec): min=217, max=1907, avg=307.04, stdev=121.66 00:11:11.231 clat percentiles (usec): 00:11:11.231 | 1.00th=[ 212], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 241], 00:11:11.231 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 269], 00:11:11.231 | 70.00th=[ 297], 80.00th=[ 343], 90.00th=[ 396], 95.00th=[ 433], 00:11:11.231 | 99.00th=[ 570], 99.50th=[ 1123], 99.90th=[ 1893], 99.95th=[ 1893], 00:11:11.231 | 99.99th=[ 1893] 00:11:11.231 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:11.231 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:11.231 lat (usec) : 250=39.59%, 500=54.22%, 750=1.50% 00:11:11.231 lat (msec) : 2=0.75%, 50=3.94% 00:11:11.231 cpu : usr=0.29%, sys=0.58%, ctx=533, majf=0, minf=1 00:11:11.231 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.231 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.231 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.231 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.231 00:11:11.231 Run status group 0 (all jobs): 00:11:11.231 READ: bw=435KiB/s (445kB/s), 81.7KiB/s-186KiB/s (83.7kB/s-191kB/s), io=452KiB (463kB), run=1013-1040msec 00:11:11.231 WRITE: bw=7877KiB/s (8066kB/s), 1969KiB/s-2022KiB/s (2016kB/s-2070kB/s), io=8192KiB (8389kB), run=1013-1040msec 00:11:11.231 00:11:11.231 Disk stats (read/write): 00:11:11.231 nvme0n1: ios=95/512, merge=0/0, ticks=1008/137, in_queue=1145, util=96.99% 00:11:11.231 nvme0n2: ios=40/512, merge=0/0, ticks=1653/127, in_queue=1780, util=97.14% 00:11:11.231 nvme0n3: ios=41/512, merge=0/0, ticks=1654/121, in_queue=1775, util=97.15% 00:11:11.231 nvme0n4: ios=16/512, merge=0/0, ticks=661/150, in_queue=811, util=89.55% 00:11:11.231 19:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:11.231 [global] 00:11:11.231 thread=1 00:11:11.231 invalidate=1 00:11:11.231 rw=randwrite 00:11:11.231 time_based=1 00:11:11.231 runtime=1 00:11:11.231 ioengine=libaio 00:11:11.231 direct=1 00:11:11.231 bs=4096 00:11:11.231 iodepth=1 00:11:11.231 norandommap=0 00:11:11.231 numjobs=1 00:11:11.231 00:11:11.231 verify_dump=1 00:11:11.231 verify_backlog=512 00:11:11.231 verify_state_save=0 00:11:11.231 do_verify=1 00:11:11.231 verify=crc32c-intel 00:11:11.231 [job0] 00:11:11.231 filename=/dev/nvme0n1 00:11:11.231 [job1] 00:11:11.231 filename=/dev/nvme0n2 00:11:11.231 [job2] 00:11:11.231 filename=/dev/nvme0n3 00:11:11.231 [job3] 00:11:11.231 filename=/dev/nvme0n4 00:11:11.231 Could not set queue depth (nvme0n1) 00:11:11.231 Could not set queue depth (nvme0n2) 00:11:11.231 Could not set queue depth 
(nvme0n3) 00:11:11.231 Could not set queue depth (nvme0n4) 00:11:11.231 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.231 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.231 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.231 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.231 fio-3.35 00:11:11.231 Starting 4 threads 00:11:12.603 00:11:12.603 job0: (groupid=0, jobs=1): err= 0: pid=832532: Thu Jul 25 19:02:04 2024 00:11:12.603 read: IOPS=1112, BW=4452KiB/s (4558kB/s)(4456KiB/1001msec) 00:11:12.603 slat (nsec): min=5691, max=53344, avg=17443.75, stdev=5138.93 00:11:12.603 clat (usec): min=362, max=1017, avg=445.27, stdev=77.59 00:11:12.603 lat (usec): min=368, max=1050, avg=462.72, stdev=80.06 00:11:12.603 clat percentiles (usec): 00:11:12.603 | 1.00th=[ 371], 5.00th=[ 388], 10.00th=[ 408], 20.00th=[ 416], 00:11:12.603 | 30.00th=[ 424], 40.00th=[ 424], 50.00th=[ 429], 60.00th=[ 433], 00:11:12.603 | 70.00th=[ 437], 80.00th=[ 445], 90.00th=[ 490], 95.00th=[ 545], 00:11:12.603 | 99.00th=[ 930], 99.50th=[ 963], 99.90th=[ 1004], 99.95th=[ 1020], 00:11:12.603 | 99.99th=[ 1020] 00:11:12.603 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:12.603 slat (nsec): min=7572, max=67844, avg=21007.58, stdev=9876.35 00:11:12.603 clat (usec): min=181, max=2939, avg=285.08, stdev=131.98 00:11:12.603 lat (usec): min=189, max=2950, avg=306.08, stdev=135.77 00:11:12.603 clat percentiles (usec): 00:11:12.603 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 212], 00:11:12.603 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 255], 00:11:12.603 | 70.00th=[ 269], 80.00th=[ 363], 90.00th=[ 441], 95.00th=[ 494], 00:11:12.603 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 
2900], 99.95th=[ 2933], 00:11:12.603 | 99.99th=[ 2933] 00:11:12.603 bw ( KiB/s): min= 5392, max= 5392, per=37.95%, avg=5392.00, stdev= 0.00, samples=1 00:11:12.603 iops : min= 1348, max= 1348, avg=1348.00, stdev= 0.00, samples=1 00:11:12.603 lat (usec) : 250=30.79%, 500=63.36%, 750=5.02%, 1000=0.68% 00:11:12.603 lat (msec) : 2=0.08%, 4=0.08% 00:11:12.603 cpu : usr=3.90%, sys=7.00%, ctx=2651, majf=0, minf=1 00:11:12.603 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:12.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.603 issued rwts: total=1114,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.603 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:12.603 job1: (groupid=0, jobs=1): err= 0: pid=832533: Thu Jul 25 19:02:04 2024 00:11:12.603 read: IOPS=121, BW=486KiB/s (498kB/s)(488KiB/1004msec) 00:11:12.603 slat (nsec): min=8030, max=47921, avg=26985.18, stdev=8707.29 00:11:12.603 clat (usec): min=425, max=42068, avg=6607.04, stdev=14583.77 00:11:12.603 lat (usec): min=457, max=42100, avg=6634.03, stdev=14584.16 00:11:12.603 clat percentiles (usec): 00:11:12.603 | 1.00th=[ 437], 5.00th=[ 453], 10.00th=[ 465], 20.00th=[ 486], 00:11:12.603 | 30.00th=[ 502], 40.00th=[ 523], 50.00th=[ 545], 60.00th=[ 578], 00:11:12.603 | 70.00th=[ 603], 80.00th=[ 693], 90.00th=[41157], 95.00th=[41681], 00:11:12.603 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:12.603 | 99.99th=[42206] 00:11:12.603 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:11:12.603 slat (nsec): min=7919, max=58005, avg=25833.23, stdev=10886.06 00:11:12.603 clat (usec): min=202, max=1556, avg=344.83, stdev=132.01 00:11:12.603 lat (usec): min=210, max=1594, avg=370.66, stdev=133.36 00:11:12.603 clat percentiles (usec): 00:11:12.603 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 247], 
00:11:12.603 | 30.00th=[ 269], 40.00th=[ 289], 50.00th=[ 330], 60.00th=[ 355], 00:11:12.603 | 70.00th=[ 375], 80.00th=[ 404], 90.00th=[ 461], 95.00th=[ 586], 00:11:12.603 | 99.00th=[ 775], 99.50th=[ 1188], 99.90th=[ 1565], 99.95th=[ 1565], 00:11:12.603 | 99.99th=[ 1565] 00:11:12.603 bw ( KiB/s): min= 4096, max= 4096, per=28.83%, avg=4096.00, stdev= 0.00, samples=1 00:11:12.603 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:12.603 lat (usec) : 250=17.03%, 500=63.56%, 750=15.14%, 1000=0.79% 00:11:12.603 lat (msec) : 2=0.47%, 4=0.16%, 50=2.84% 00:11:12.603 cpu : usr=1.30%, sys=1.30%, ctx=634, majf=0, minf=2 00:11:12.603 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:12.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.604 issued rwts: total=122,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.604 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:12.604 job2: (groupid=0, jobs=1): err= 0: pid=832536: Thu Jul 25 19:02:04 2024 00:11:12.604 read: IOPS=858, BW=3433KiB/s (3515kB/s)(3464KiB/1009msec) 00:11:12.604 slat (nsec): min=9400, max=43820, avg=20207.92, stdev=4225.92 00:11:12.604 clat (usec): min=399, max=44976, avg=750.19, stdev=3431.28 00:11:12.604 lat (usec): min=410, max=44995, avg=770.40, stdev=3431.65 00:11:12.604 clat percentiles (usec): 00:11:12.604 | 1.00th=[ 429], 5.00th=[ 437], 10.00th=[ 441], 20.00th=[ 445], 00:11:12.604 | 30.00th=[ 449], 40.00th=[ 449], 50.00th=[ 453], 60.00th=[ 457], 00:11:12.604 | 70.00th=[ 465], 80.00th=[ 478], 90.00th=[ 506], 95.00th=[ 545], 00:11:12.604 | 99.00th=[ 611], 99.50th=[41157], 99.90th=[44827], 99.95th=[44827], 00:11:12.604 | 99.99th=[44827] 00:11:12.604 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:11:12.604 slat (nsec): min=8126, max=68431, avg=26417.46, stdev=8577.15 00:11:12.604 clat (usec): min=220, 
max=1027, avg=294.97, stdev=68.55 00:11:12.604 lat (usec): min=235, max=1070, avg=321.39, stdev=72.06 00:11:12.604 clat percentiles (usec): 00:11:12.604 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 251], 00:11:12.604 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 281], 00:11:12.604 | 70.00th=[ 302], 80.00th=[ 334], 90.00th=[ 392], 95.00th=[ 424], 00:11:12.604 | 99.00th=[ 461], 99.50th=[ 469], 99.90th=[ 971], 99.95th=[ 1029], 00:11:12.604 | 99.99th=[ 1029] 00:11:12.604 bw ( KiB/s): min= 2256, max= 5936, per=28.83%, avg=4096.00, stdev=2602.15, samples=2 00:11:12.604 iops : min= 564, max= 1484, avg=1024.00, stdev=650.54, samples=2 00:11:12.604 lat (usec) : 250=9.10%, 500=85.19%, 750=5.19%, 1000=0.16% 00:11:12.604 lat (msec) : 2=0.05%, 50=0.32% 00:11:12.604 cpu : usr=2.68%, sys=6.35%, ctx=1891, majf=0, minf=1 00:11:12.604 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:12.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.604 issued rwts: total=866,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.604 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:12.604 job3: (groupid=0, jobs=1): err= 0: pid=832537: Thu Jul 25 19:02:04 2024 00:11:12.604 read: IOPS=19, BW=79.8KiB/s (81.7kB/s)(80.0KiB/1003msec) 00:11:12.604 slat (nsec): min=14982, max=36660, avg=31188.35, stdev=8060.14 00:11:12.604 clat (usec): min=1239, max=41002, avg=38620.70, stdev=8914.09 00:11:12.604 lat (usec): min=1258, max=41038, avg=38651.88, stdev=8917.68 00:11:12.604 clat percentiles (usec): 00:11:12.604 | 1.00th=[ 1237], 5.00th=[ 1237], 10.00th=[34341], 20.00th=[40633], 00:11:12.604 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:12.604 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:12.604 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 
00:11:12.604 | 99.99th=[41157] 00:11:12.604 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:11:12.604 slat (usec): min=9, max=100, avg=34.78, stdev=14.73 00:11:12.604 clat (usec): min=168, max=959, avg=405.12, stdev=64.51 00:11:12.604 lat (usec): min=268, max=971, avg=439.90, stdev=68.21 00:11:12.604 clat percentiles (usec): 00:11:12.604 | 1.00th=[ 273], 5.00th=[ 310], 10.00th=[ 330], 20.00th=[ 355], 00:11:12.604 | 30.00th=[ 375], 40.00th=[ 396], 50.00th=[ 408], 60.00th=[ 424], 00:11:12.604 | 70.00th=[ 433], 80.00th=[ 445], 90.00th=[ 469], 95.00th=[ 506], 00:11:12.604 | 99.00th=[ 537], 99.50th=[ 644], 99.90th=[ 963], 99.95th=[ 963], 00:11:12.604 | 99.99th=[ 963] 00:11:12.604 bw ( KiB/s): min= 4096, max= 4096, per=28.83%, avg=4096.00, stdev= 0.00, samples=1 00:11:12.604 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:12.604 lat (usec) : 250=0.19%, 500=90.79%, 750=4.89%, 1000=0.38% 00:11:12.604 lat (msec) : 2=0.19%, 50=3.57% 00:11:12.604 cpu : usr=1.10%, sys=2.20%, ctx=534, majf=0, minf=1 00:11:12.604 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:12.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.604 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.604 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:12.604 00:11:12.604 Run status group 0 (all jobs): 00:11:12.604 READ: bw=8412KiB/s (8614kB/s), 79.8KiB/s-4452KiB/s (81.7kB/s-4558kB/s), io=8488KiB (8692kB), run=1001-1009msec 00:11:12.604 WRITE: bw=13.9MiB/s (14.5MB/s), 2040KiB/s-6138KiB/s (2089kB/s-6285kB/s), io=14.0MiB (14.7MB), run=1001-1009msec 00:11:12.604 00:11:12.604 Disk stats (read/write): 00:11:12.604 nvme0n1: ios=1066/1052, merge=0/0, ticks=617/314, in_queue=931, util=97.39% 00:11:12.604 nvme0n2: ios=167/512, merge=0/0, ticks=1012/164, in_queue=1176, util=92.68% 
00:11:12.604 nvme0n3: ios=918/1024, merge=0/0, ticks=590/283, in_queue=873, util=93.51% 00:11:12.604 nvme0n4: ios=74/512, merge=0/0, ticks=1005/158, in_queue=1163, util=97.67% 00:11:12.604 19:02:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:12.604 [global] 00:11:12.604 thread=1 00:11:12.604 invalidate=1 00:11:12.604 rw=write 00:11:12.604 time_based=1 00:11:12.604 runtime=1 00:11:12.604 ioengine=libaio 00:11:12.604 direct=1 00:11:12.604 bs=4096 00:11:12.604 iodepth=128 00:11:12.604 norandommap=0 00:11:12.604 numjobs=1 00:11:12.604 00:11:12.604 verify_dump=1 00:11:12.604 verify_backlog=512 00:11:12.604 verify_state_save=0 00:11:12.604 do_verify=1 00:11:12.604 verify=crc32c-intel 00:11:12.604 [job0] 00:11:12.604 filename=/dev/nvme0n1 00:11:12.604 [job1] 00:11:12.604 filename=/dev/nvme0n2 00:11:12.604 [job2] 00:11:12.604 filename=/dev/nvme0n3 00:11:12.604 [job3] 00:11:12.604 filename=/dev/nvme0n4 00:11:12.604 Could not set queue depth (nvme0n1) 00:11:12.604 Could not set queue depth (nvme0n2) 00:11:12.604 Could not set queue depth (nvme0n3) 00:11:12.604 Could not set queue depth (nvme0n4) 00:11:12.604 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:12.604 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:12.604 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:12.604 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:12.604 fio-3.35 00:11:12.604 Starting 4 threads 00:11:13.990 00:11:13.990 job0: (groupid=0, jobs=1): err= 0: pid=832765: Thu Jul 25 19:02:06 2024 00:11:13.990 read: IOPS=4433, BW=17.3MiB/s (18.2MB/s)(17.4MiB/1005msec) 00:11:13.990 slat (usec): min=2, max=17340, avg=94.30, 
stdev=637.23 00:11:13.990 clat (usec): min=1341, max=29649, avg=12499.86, stdev=4317.84 00:11:13.990 lat (usec): min=1500, max=30007, avg=12594.17, stdev=4352.70 00:11:13.990 clat percentiles (usec): 00:11:13.990 | 1.00th=[ 1713], 5.00th=[ 7898], 10.00th=[ 9241], 20.00th=[10290], 00:11:13.990 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11994], 00:11:13.990 | 70.00th=[12780], 80.00th=[13698], 90.00th=[18482], 95.00th=[22938], 00:11:13.990 | 99.00th=[26870], 99.50th=[27919], 99.90th=[29492], 99.95th=[29754], 00:11:13.990 | 99.99th=[29754] 00:11:13.990 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:11:13.990 slat (usec): min=3, max=19122, avg=116.81, stdev=727.13 00:11:13.990 clat (usec): min=3049, max=45714, avg=14887.69, stdev=8096.42 00:11:13.990 lat (usec): min=4871, max=45760, avg=15004.50, stdev=8161.65 00:11:13.990 clat percentiles (usec): 00:11:13.990 | 1.00th=[ 6587], 5.00th=[ 8291], 10.00th=[ 9110], 20.00th=[ 9634], 00:11:13.990 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11731], 00:11:13.990 | 70.00th=[13829], 80.00th=[20841], 90.00th=[29230], 95.00th=[33162], 00:11:13.990 | 99.00th=[36963], 99.50th=[39584], 99.90th=[40109], 99.95th=[43779], 00:11:13.990 | 99.99th=[45876] 00:11:13.990 bw ( KiB/s): min=13296, max=23568, per=28.95%, avg=18432.00, stdev=7263.40, samples=2 00:11:13.990 iops : min= 3324, max= 5892, avg=4608.00, stdev=1815.85, samples=2 00:11:13.990 lat (msec) : 2=0.74%, 4=0.56%, 10=21.79%, 20=61.75%, 50=15.16% 00:11:13.990 cpu : usr=3.59%, sys=6.47%, ctx=401, majf=0, minf=1 00:11:13.990 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:13.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.990 issued rwts: total=4456,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.990 latency : target=0, window=0, percentile=100.00%, depth=128 
00:11:13.990 job1: (groupid=0, jobs=1): err= 0: pid=832766: Thu Jul 25 19:02:06 2024 00:11:13.990 read: IOPS=2923, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1007msec) 00:11:13.990 slat (usec): min=2, max=25042, avg=175.65, stdev=1262.75 00:11:13.990 clat (usec): min=1894, max=102178, avg=23759.83, stdev=21071.96 00:11:13.990 lat (usec): min=1967, max=119158, avg=23935.48, stdev=21186.93 00:11:13.990 clat percentiles (msec): 00:11:13.990 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:11:13.990 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:11:13.990 | 70.00th=[ 23], 80.00th=[ 34], 90.00th=[ 61], 95.00th=[ 71], 00:11:13.991 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 103], 99.95th=[ 103], 00:11:13.991 | 99.99th=[ 103] 00:11:13.991 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:11:13.991 slat (usec): min=3, max=10001, avg=137.07, stdev=733.60 00:11:13.991 clat (usec): min=895, max=111231, avg=18834.29, stdev=20440.14 00:11:13.991 lat (usec): min=915, max=111250, avg=18971.36, stdev=20583.94 00:11:13.991 clat percentiles (msec): 00:11:13.991 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 10], 00:11:13.991 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:11:13.991 | 70.00th=[ 14], 80.00th=[ 16], 90.00th=[ 50], 95.00th=[ 66], 00:11:13.991 | 99.00th=[ 105], 99.50th=[ 106], 99.90th=[ 112], 99.95th=[ 112], 00:11:13.991 | 99.99th=[ 112] 00:11:13.991 bw ( KiB/s): min= 4096, max=20480, per=19.30%, avg=12288.00, stdev=11585.24, samples=2 00:11:13.991 iops : min= 1024, max= 5120, avg=3072.00, stdev=2896.31, samples=2 00:11:13.991 lat (usec) : 1000=0.05% 00:11:13.991 lat (msec) : 2=0.30%, 4=1.43%, 10=16.41%, 20=56.70%, 50=13.55% 00:11:13.991 lat (msec) : 100=10.56%, 250=1.01% 00:11:13.991 cpu : usr=4.47%, sys=6.96%, ctx=397, majf=0, minf=1 00:11:13.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:13.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:13.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.991 issued rwts: total=2944,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.991 job2: (groupid=0, jobs=1): err= 0: pid=832767: Thu Jul 25 19:02:06 2024 00:11:13.991 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:11:13.991 slat (usec): min=2, max=22920, avg=115.44, stdev=845.06 00:11:13.991 clat (usec): min=4429, max=52044, avg=15873.28, stdev=6394.53 00:11:13.991 lat (usec): min=4433, max=52081, avg=15988.71, stdev=6457.60 00:11:13.991 clat percentiles (usec): 00:11:13.991 | 1.00th=[ 7963], 5.00th=[10552], 10.00th=[11207], 20.00th=[11994], 00:11:13.991 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13566], 60.00th=[14222], 00:11:13.991 | 70.00th=[15533], 80.00th=[19792], 90.00th=[23987], 95.00th=[28967], 00:11:13.991 | 99.00th=[40633], 99.50th=[40633], 99.90th=[44303], 99.95th=[47973], 00:11:13.991 | 99.99th=[52167] 00:11:13.991 write: IOPS=3788, BW=14.8MiB/s (15.5MB/s)(15.0MiB/1013msec); 0 zone resets 00:11:13.991 slat (usec): min=3, max=13655, avg=123.11, stdev=810.67 00:11:13.991 clat (usec): min=2441, max=76448, avg=18529.29, stdev=12470.33 00:11:13.991 lat (usec): min=2449, max=76455, avg=18652.40, stdev=12549.24 00:11:13.991 clat percentiles (usec): 00:11:13.991 | 1.00th=[ 4228], 5.00th=[ 6521], 10.00th=[ 7373], 20.00th=[ 9765], 00:11:13.991 | 30.00th=[11338], 40.00th=[12649], 50.00th=[15664], 60.00th=[18220], 00:11:13.991 | 70.00th=[21365], 80.00th=[23462], 90.00th=[29492], 95.00th=[50070], 00:11:13.991 | 99.00th=[65799], 99.50th=[66847], 99.90th=[73925], 99.95th=[73925], 00:11:13.991 | 99.99th=[76022] 00:11:13.991 bw ( KiB/s): min=14280, max=15400, per=23.31%, avg=14840.00, stdev=791.96, samples=2 00:11:13.991 iops : min= 3570, max= 3850, avg=3710.00, stdev=197.99, samples=2 00:11:13.991 lat (msec) : 4=0.43%, 10=13.78%, 20=59.65%, 50=23.43%, 100=2.71% 00:11:13.991 cpu : 
usr=3.16%, sys=5.83%, ctx=321, majf=0, minf=1 00:11:13.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:13.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.991 issued rwts: total=3584,3838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.991 job3: (groupid=0, jobs=1): err= 0: pid=832770: Thu Jul 25 19:02:06 2024 00:11:13.991 read: IOPS=4420, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1005msec) 00:11:13.991 slat (usec): min=2, max=48689, avg=104.02, stdev=1028.19 00:11:13.991 clat (usec): min=3092, max=64820, avg=14841.29, stdev=7195.67 00:11:13.991 lat (usec): min=3107, max=64857, avg=14945.32, stdev=7249.81 00:11:13.991 clat percentiles (usec): 00:11:13.991 | 1.00th=[ 4817], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[11338], 00:11:13.991 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13698], 60.00th=[14484], 00:11:13.991 | 70.00th=[15270], 80.00th=[16909], 90.00th=[19268], 95.00th=[22938], 00:11:13.991 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50070], 99.95th=[52691], 00:11:13.991 | 99.99th=[64750] 00:11:13.991 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:11:13.991 slat (usec): min=3, max=10804, avg=101.87, stdev=627.40 00:11:13.991 clat (usec): min=1207, max=39996, avg=13200.58, stdev=5493.67 00:11:13.991 lat (usec): min=1223, max=40010, avg=13302.45, stdev=5511.88 00:11:13.991 clat percentiles (usec): 00:11:13.991 | 1.00th=[ 4293], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 9241], 00:11:13.991 | 30.00th=[10814], 40.00th=[11863], 50.00th=[12518], 60.00th=[13698], 00:11:13.991 | 70.00th=[14615], 80.00th=[15664], 90.00th=[17695], 95.00th=[19006], 00:11:13.991 | 99.00th=[39584], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:11:13.991 | 99.99th=[40109] 00:11:13.991 bw ( KiB/s): min=18416, max=18448, per=28.95%, 
avg=18432.00, stdev=22.63, samples=2 00:11:13.991 iops : min= 4604, max= 4612, avg=4608.00, stdev= 5.66, samples=2 00:11:13.991 lat (msec) : 2=0.09%, 4=0.57%, 10=18.15%, 20=74.83%, 50=5.79% 00:11:13.991 lat (msec) : 100=0.56% 00:11:13.991 cpu : usr=4.68%, sys=7.97%, ctx=373, majf=0, minf=1 00:11:13.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:13.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.991 issued rwts: total=4443,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.991 00:11:13.991 Run status group 0 (all jobs): 00:11:13.991 READ: bw=59.5MiB/s (62.4MB/s), 11.4MiB/s-17.3MiB/s (12.0MB/s-18.2MB/s), io=60.3MiB (63.2MB), run=1005-1013msec 00:11:13.991 WRITE: bw=62.2MiB/s (65.2MB/s), 11.9MiB/s-17.9MiB/s (12.5MB/s-18.8MB/s), io=63.0MiB (66.1MB), run=1005-1013msec 00:11:13.991 00:11:13.991 Disk stats (read/write): 00:11:13.991 nvme0n1: ios=3611/3701, merge=0/0, ticks=22446/25316, in_queue=47762, util=97.80% 00:11:13.991 nvme0n2: ios=2575/3007, merge=0/0, ticks=20085/21896, in_queue=41981, util=86.99% 00:11:13.991 nvme0n3: ios=3113/3359, merge=0/0, ticks=30315/36531, in_queue=66846, util=96.45% 00:11:13.991 nvme0n4: ios=3608/3828, merge=0/0, ticks=42973/33635, in_queue=76608, util=99.68% 00:11:13.991 19:02:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:13.991 [global] 00:11:13.991 thread=1 00:11:13.991 invalidate=1 00:11:13.991 rw=randwrite 00:11:13.991 time_based=1 00:11:13.991 runtime=1 00:11:13.991 ioengine=libaio 00:11:13.991 direct=1 00:11:13.991 bs=4096 00:11:13.991 iodepth=128 00:11:13.991 norandommap=0 00:11:13.991 numjobs=1 00:11:13.991 00:11:13.991 verify_dump=1 00:11:13.991 
verify_backlog=512 00:11:13.991 verify_state_save=0 00:11:13.991 do_verify=1 00:11:13.991 verify=crc32c-intel 00:11:13.991 [job0] 00:11:13.991 filename=/dev/nvme0n1 00:11:13.991 [job1] 00:11:13.991 filename=/dev/nvme0n2 00:11:13.991 [job2] 00:11:13.991 filename=/dev/nvme0n3 00:11:13.991 [job3] 00:11:13.991 filename=/dev/nvme0n4 00:11:13.991 Could not set queue depth (nvme0n1) 00:11:13.991 Could not set queue depth (nvme0n2) 00:11:13.991 Could not set queue depth (nvme0n3) 00:11:13.991 Could not set queue depth (nvme0n4) 00:11:13.991 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.991 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.991 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.991 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.991 fio-3.35 00:11:13.991 Starting 4 threads 00:11:15.367 00:11:15.367 job0: (groupid=0, jobs=1): err= 0: pid=833000: Thu Jul 25 19:02:07 2024 00:11:15.367 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:11:15.367 slat (usec): min=2, max=31010, avg=123.61, stdev=968.06 00:11:15.367 clat (usec): min=4651, max=55052, avg=15714.23, stdev=8846.96 00:11:15.367 lat (usec): min=4655, max=55067, avg=15837.84, stdev=8893.46 00:11:15.367 clat percentiles (usec): 00:11:15.367 | 1.00th=[ 4686], 5.00th=[ 6587], 10.00th=[ 8225], 20.00th=[10159], 00:11:15.367 | 30.00th=[10945], 40.00th=[12125], 50.00th=[13960], 60.00th=[14877], 00:11:15.367 | 70.00th=[16319], 80.00th=[18220], 90.00th=[23725], 95.00th=[36963], 00:11:15.367 | 99.00th=[49546], 99.50th=[50070], 99.90th=[54789], 99.95th=[54789], 00:11:15.367 | 99.99th=[55313] 00:11:15.367 write: IOPS=4143, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1004msec); 0 zone resets 00:11:15.367 slat (usec): min=3, max=19628, 
avg=112.05, stdev=776.25 00:11:15.367 clat (usec): min=1606, max=47470, avg=15071.58, stdev=5986.93 00:11:15.367 lat (usec): min=3977, max=55135, avg=15183.64, stdev=6039.80 00:11:15.367 clat percentiles (usec): 00:11:15.367 | 1.00th=[ 4752], 5.00th=[ 7177], 10.00th=[ 8979], 20.00th=[10683], 00:11:15.367 | 30.00th=[11600], 40.00th=[12518], 50.00th=[13829], 60.00th=[15664], 00:11:15.367 | 70.00th=[17171], 80.00th=[18744], 90.00th=[21627], 95.00th=[27132], 00:11:15.367 | 99.00th=[35390], 99.50th=[38536], 99.90th=[47449], 99.95th=[47449], 00:11:15.367 | 99.99th=[47449] 00:11:15.367 bw ( KiB/s): min=16176, max=16592, per=25.50%, avg=16384.00, stdev=294.16, samples=2 00:11:15.367 iops : min= 4044, max= 4148, avg=4096.00, stdev=73.54, samples=2 00:11:15.367 lat (msec) : 2=0.01%, 4=0.04%, 10=14.51%, 20=71.39%, 50=13.76% 00:11:15.367 lat (msec) : 100=0.29% 00:11:15.367 cpu : usr=3.39%, sys=5.68%, ctx=322, majf=0, minf=1 00:11:15.367 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:15.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:15.367 issued rwts: total=4096,4160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:15.367 job1: (groupid=0, jobs=1): err= 0: pid=833001: Thu Jul 25 19:02:07 2024 00:11:15.367 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:11:15.367 slat (usec): min=2, max=13261, avg=127.84, stdev=826.97 00:11:15.367 clat (usec): min=5650, max=42024, avg=16601.02, stdev=5713.07 00:11:15.367 lat (usec): min=5681, max=42031, avg=16728.86, stdev=5760.47 00:11:15.367 clat percentiles (usec): 00:11:15.367 | 1.00th=[ 8029], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[11338], 00:11:15.367 | 30.00th=[12911], 40.00th=[13435], 50.00th=[15533], 60.00th=[17433], 00:11:15.367 | 70.00th=[19006], 80.00th=[21890], 90.00th=[25297], 95.00th=[27395], 
00:11:15.367 | 99.00th=[31065], 99.50th=[32375], 99.90th=[33817], 99.95th=[34866], 00:11:15.367 | 99.99th=[42206] 00:11:15.367 write: IOPS=4350, BW=17.0MiB/s (17.8MB/s)(17.1MiB/1004msec); 0 zone resets 00:11:15.367 slat (usec): min=4, max=11732, avg=98.16, stdev=592.64 00:11:15.367 clat (usec): min=728, max=50112, avg=13586.31, stdev=4971.09 00:11:15.367 lat (usec): min=1760, max=50134, avg=13684.47, stdev=5000.52 00:11:15.367 clat percentiles (usec): 00:11:15.367 | 1.00th=[ 4555], 5.00th=[ 7111], 10.00th=[ 7963], 20.00th=[10159], 00:11:15.367 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13173], 60.00th=[13698], 00:11:15.367 | 70.00th=[14484], 80.00th=[15926], 90.00th=[19792], 95.00th=[22676], 00:11:15.367 | 99.00th=[32113], 99.50th=[35914], 99.90th=[50070], 99.95th=[50070], 00:11:15.367 | 99.99th=[50070] 00:11:15.367 bw ( KiB/s): min=16552, max=17376, per=26.40%, avg=16964.00, stdev=582.66, samples=2 00:11:15.367 iops : min= 4138, max= 4344, avg=4241.00, stdev=145.66, samples=2 00:11:15.367 lat (usec) : 750=0.01% 00:11:15.367 lat (msec) : 2=0.02%, 4=0.21%, 10=14.54%, 20=67.66%, 50=17.47% 00:11:15.367 lat (msec) : 100=0.07% 00:11:15.367 cpu : usr=4.89%, sys=9.17%, ctx=341, majf=0, minf=1 00:11:15.367 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:15.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:15.367 issued rwts: total=4096,4368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:15.367 job2: (groupid=0, jobs=1): err= 0: pid=833002: Thu Jul 25 19:02:07 2024 00:11:15.367 read: IOPS=3150, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1006msec) 00:11:15.367 slat (usec): min=2, max=17668, avg=148.23, stdev=994.09 00:11:15.367 clat (usec): min=3328, max=47506, avg=19150.10, stdev=8037.67 00:11:15.367 lat (usec): min=4882, max=47524, avg=19298.34, stdev=8085.25 
00:11:15.367 clat percentiles (usec): 00:11:15.367 | 1.00th=[ 5538], 5.00th=[10421], 10.00th=[12387], 20.00th=[13304], 00:11:15.367 | 30.00th=[13566], 40.00th=[14877], 50.00th=[16319], 60.00th=[18744], 00:11:15.367 | 70.00th=[22152], 80.00th=[25035], 90.00th=[29754], 95.00th=[36439], 00:11:15.367 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:11:15.367 | 99.99th=[47449] 00:11:15.367 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:11:15.367 slat (usec): min=3, max=18712, avg=136.94, stdev=907.96 00:11:15.367 clat (usec): min=5089, max=66048, avg=18384.66, stdev=10251.13 00:11:15.367 lat (usec): min=5102, max=66114, avg=18521.60, stdev=10331.48 00:11:15.367 clat percentiles (usec): 00:11:15.367 | 1.00th=[ 8225], 5.00th=[ 8979], 10.00th=[10814], 20.00th=[12256], 00:11:15.367 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13698], 60.00th=[14877], 00:11:15.367 | 70.00th=[18744], 80.00th=[25035], 90.00th=[31851], 95.00th=[38011], 00:11:15.367 | 99.00th=[61604], 99.50th=[61604], 99.90th=[62129], 99.95th=[62129], 00:11:15.367 | 99.99th=[65799] 00:11:15.367 bw ( KiB/s): min=11776, max=16648, per=22.12%, avg=14212.00, stdev=3445.02, samples=2 00:11:15.367 iops : min= 2944, max= 4162, avg=3553.00, stdev=861.26, samples=2 00:11:15.367 lat (msec) : 4=0.01%, 10=5.76%, 20=61.87%, 50=30.98%, 100=1.38% 00:11:15.367 cpu : usr=2.79%, sys=4.58%, ctx=259, majf=0, minf=1 00:11:15.367 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:15.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:15.367 issued rwts: total=3169,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:15.367 job3: (groupid=0, jobs=1): err= 0: pid=833003: Thu Jul 25 19:02:07 2024 00:11:15.367 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 
00:11:15.367 slat (usec): min=3, max=21007, avg=145.74, stdev=950.24 00:11:15.368 clat (usec): min=9315, max=58778, avg=19104.39, stdev=7165.74 00:11:15.368 lat (usec): min=9324, max=58840, avg=19250.13, stdev=7198.64 00:11:15.368 clat percentiles (usec): 00:11:15.368 | 1.00th=[10552], 5.00th=[12125], 10.00th=[13042], 20.00th=[14091], 00:11:15.368 | 30.00th=[15008], 40.00th=[15664], 50.00th=[16450], 60.00th=[17957], 00:11:15.368 | 70.00th=[20841], 80.00th=[23200], 90.00th=[28705], 95.00th=[31851], 00:11:15.368 | 99.00th=[52691], 99.50th=[54264], 99.90th=[58459], 99.95th=[58983], 00:11:15.368 | 99.99th=[58983] 00:11:15.368 write: IOPS=4026, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1005msec); 0 zone resets 00:11:15.368 slat (usec): min=4, max=28890, avg=103.60, stdev=670.58 00:11:15.368 clat (usec): min=731, max=48703, avg=14587.06, stdev=5400.40 00:11:15.368 lat (usec): min=1517, max=48906, avg=14690.66, stdev=5431.23 00:11:15.368 clat percentiles (usec): 00:11:15.368 | 1.00th=[ 4424], 5.00th=[ 8029], 10.00th=[ 9765], 20.00th=[11600], 00:11:15.368 | 30.00th=[13698], 40.00th=[14222], 50.00th=[14615], 60.00th=[15008], 00:11:15.368 | 70.00th=[15533], 80.00th=[16319], 90.00th=[17171], 95.00th=[17957], 00:11:15.368 | 99.00th=[46400], 99.50th=[47449], 99.90th=[48497], 99.95th=[48497], 00:11:15.368 | 99.99th=[48497] 00:11:15.368 bw ( KiB/s): min=14976, max=16384, per=24.40%, avg=15680.00, stdev=995.61, samples=2 00:11:15.368 iops : min= 3744, max= 4096, avg=3920.00, stdev=248.90, samples=2 00:11:15.368 lat (usec) : 750=0.01% 00:11:15.368 lat (msec) : 2=0.13%, 4=0.33%, 10=5.96%, 20=76.32%, 50=16.66% 00:11:15.368 lat (msec) : 100=0.59% 00:11:15.368 cpu : usr=4.58%, sys=7.87%, ctx=431, majf=0, minf=1 00:11:15.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:15.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:15.368 issued rwts: 
total=3584,4047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.368 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:15.368 00:11:15.368 Run status group 0 (all jobs): 00:11:15.368 READ: bw=58.0MiB/s (60.8MB/s), 12.3MiB/s-15.9MiB/s (12.9MB/s-16.7MB/s), io=58.4MiB (61.2MB), run=1004-1006msec 00:11:15.368 WRITE: bw=62.7MiB/s (65.8MB/s), 13.9MiB/s-17.0MiB/s (14.6MB/s-17.8MB/s), io=63.1MiB (66.2MB), run=1004-1006msec 00:11:15.368 00:11:15.368 Disk stats (read/write): 00:11:15.368 nvme0n1: ios=3227/3584, merge=0/0, ticks=26235/24276, in_queue=50511, util=96.89% 00:11:15.368 nvme0n2: ios=3404/3584, merge=0/0, ticks=32362/27186, in_queue=59548, util=97.66% 00:11:15.368 nvme0n3: ios=2617/2647, merge=0/0, ticks=23757/20875, in_queue=44632, util=97.48% 00:11:15.368 nvme0n4: ios=3109/3297, merge=0/0, ticks=25072/22121, in_queue=47193, util=96.40% 00:11:15.368 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:15.368 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=833139 00:11:15.368 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:15.368 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:15.368 [global] 00:11:15.368 thread=1 00:11:15.368 invalidate=1 00:11:15.368 rw=read 00:11:15.368 time_based=1 00:11:15.368 runtime=10 00:11:15.368 ioengine=libaio 00:11:15.368 direct=1 00:11:15.368 bs=4096 00:11:15.368 iodepth=1 00:11:15.368 norandommap=1 00:11:15.368 numjobs=1 00:11:15.368 00:11:15.368 [job0] 00:11:15.368 filename=/dev/nvme0n1 00:11:15.368 [job1] 00:11:15.368 filename=/dev/nvme0n2 00:11:15.368 [job2] 00:11:15.368 filename=/dev/nvme0n3 00:11:15.368 [job3] 00:11:15.368 filename=/dev/nvme0n4 00:11:15.368 Could not set queue depth (nvme0n1) 00:11:15.368 Could not set queue depth (nvme0n2) 00:11:15.368 Could not set queue 
depth (nvme0n3) 00:11:15.368 Could not set queue depth (nvme0n4) 00:11:15.626 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.626 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.626 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.626 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.626 fio-3.35 00:11:15.626 Starting 4 threads 00:11:18.905 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:18.905 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:18.905 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=22798336, buflen=4096 00:11:18.905 fio: pid=833230, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:18.905 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.905 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:18.905 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=26193920, buflen=4096 00:11:18.905 fio: pid=833229, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:19.162 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.162 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc1 00:11:19.162 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=17002496, buflen=4096 00:11:19.162 fio: pid=833227, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:19.419 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=1839104, buflen=4096 00:11:19.419 fio: pid=833228, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:19.419 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.419 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:19.419 00:11:19.419 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=833227: Thu Jul 25 19:02:11 2024 00:11:19.419 read: IOPS=1212, BW=4848KiB/s (4964kB/s)(16.2MiB/3425msec) 00:11:19.419 slat (usec): min=4, max=9798, avg=23.13, stdev=204.95 00:11:19.419 clat (usec): min=282, max=42111, avg=791.84, stdev=3922.64 00:11:19.419 lat (usec): min=287, max=42129, avg=814.98, stdev=3927.66 00:11:19.419 clat percentiles (usec): 00:11:19.419 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 322], 20.00th=[ 343], 00:11:19.419 | 30.00th=[ 375], 40.00th=[ 400], 50.00th=[ 420], 60.00th=[ 433], 00:11:19.419 | 70.00th=[ 453], 80.00th=[ 474], 90.00th=[ 515], 95.00th=[ 545], 00:11:19.419 | 99.00th=[ 701], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:19.419 | 99.99th=[42206] 00:11:19.419 bw ( KiB/s): min= 96, max= 8904, per=28.28%, avg=5072.00, stdev=3607.96, samples=6 00:11:19.419 iops : min= 24, max= 2226, avg=1268.00, stdev=901.99, samples=6 00:11:19.419 lat (usec) : 500=87.02%, 750=11.99%, 1000=0.02% 00:11:19.419 lat (msec) : 4=0.02%, 50=0.92% 00:11:19.419 cpu : usr=0.70%, sys=2.86%, ctx=4155, majf=0, minf=1 00:11:19.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.419 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.419 issued rwts: total=4152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.419 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=833228: Thu Jul 25 19:02:11 2024 00:11:19.419 read: IOPS=121, BW=486KiB/s (498kB/s)(1796KiB/3694msec) 00:11:19.419 slat (usec): min=4, max=13870, avg=43.36, stdev=653.32 00:11:19.419 clat (usec): min=348, max=45967, avg=8130.44, stdev=16075.80 00:11:19.419 lat (usec): min=353, max=45987, avg=8173.87, stdev=16078.20 00:11:19.419 clat percentiles (usec): 00:11:19.419 | 1.00th=[ 359], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 383], 00:11:19.419 | 30.00th=[ 404], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 445], 00:11:19.419 | 70.00th=[ 502], 80.00th=[ 586], 90.00th=[41681], 95.00th=[42206], 00:11:19.419 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:11:19.419 | 99.99th=[45876] 00:11:19.419 bw ( KiB/s): min= 96, max= 1222, per=1.45%, avg=260.29, stdev=424.10, samples=7 00:11:19.419 iops : min= 24, max= 305, avg=65.00, stdev=105.83, samples=7 00:11:19.419 lat (usec) : 500=69.78%, 750=10.89%, 1000=0.22% 00:11:19.419 lat (msec) : 2=0.22%, 50=18.67% 00:11:19.419 cpu : usr=0.08%, sys=0.19%, ctx=452, majf=0, minf=1 00:11:19.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.419 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.419 issued rwts: total=450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.419 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote 
I/O error): pid=833229: Thu Jul 25 19:02:11 2024 00:11:19.419 read: IOPS=2024, BW=8095KiB/s (8289kB/s)(25.0MiB/3160msec) 00:11:19.419 slat (nsec): min=5245, max=67911, avg=17022.50, stdev=10321.14 00:11:19.419 clat (usec): min=313, max=42571, avg=469.28, stdev=1352.23 00:11:19.419 lat (usec): min=320, max=42592, avg=486.30, stdev=1352.38 00:11:19.419 clat percentiles (usec): 00:11:19.419 | 1.00th=[ 326], 5.00th=[ 338], 10.00th=[ 351], 20.00th=[ 367], 00:11:19.419 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 412], 00:11:19.419 | 70.00th=[ 433], 80.00th=[ 461], 90.00th=[ 519], 95.00th=[ 619], 00:11:19.419 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[41157], 99.95th=[41157], 00:11:19.419 | 99.99th=[42730] 00:11:19.419 bw ( KiB/s): min= 2912, max= 9816, per=45.20%, avg=8106.67, stdev=2714.41, samples=6 00:11:19.419 iops : min= 728, max= 2454, avg=2026.67, stdev=678.60, samples=6 00:11:19.419 lat (usec) : 500=87.79%, 750=11.05%, 1000=1.02% 00:11:19.419 lat (msec) : 10=0.02%, 50=0.11% 00:11:19.419 cpu : usr=1.87%, sys=3.96%, ctx=6396, majf=0, minf=1 00:11:19.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.420 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.420 issued rwts: total=6396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.420 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.420 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=833230: Thu Jul 25 19:02:11 2024 00:11:19.420 read: IOPS=1909, BW=7638KiB/s (7821kB/s)(21.7MiB/2915msec) 00:11:19.420 slat (nsec): min=6656, max=70854, avg=17393.39, stdev=9236.50 00:11:19.420 clat (usec): min=324, max=42492, avg=495.79, stdev=1132.41 00:11:19.420 lat (usec): min=331, max=42504, avg=513.19, stdev=1132.41 00:11:19.420 clat percentiles (usec): 00:11:19.420 | 1.00th=[ 338], 5.00th=[ 351], 
10.00th=[ 363], 20.00th=[ 392], 00:11:19.420 | 30.00th=[ 424], 40.00th=[ 449], 50.00th=[ 461], 60.00th=[ 478], 00:11:19.420 | 70.00th=[ 490], 80.00th=[ 506], 90.00th=[ 529], 95.00th=[ 668], 00:11:19.420 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[ 1029], 99.95th=[41157], 00:11:19.420 | 99.99th=[42730] 00:11:19.420 bw ( KiB/s): min= 5176, max= 9768, per=42.64%, avg=7646.40, stdev=1632.88, samples=5 00:11:19.420 iops : min= 1294, max= 2442, avg=1911.60, stdev=408.22, samples=5 00:11:19.420 lat (usec) : 500=77.17%, 750=21.65%, 1000=1.06% 00:11:19.420 lat (msec) : 2=0.02%, 50=0.09% 00:11:19.420 cpu : usr=1.78%, sys=4.22%, ctx=5567, majf=0, minf=1 00:11:19.420 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.420 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.420 issued rwts: total=5567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.420 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.420 00:11:19.420 Run status group 0 (all jobs): 00:11:19.420 READ: bw=17.5MiB/s (18.4MB/s), 486KiB/s-8095KiB/s (498kB/s-8289kB/s), io=64.7MiB (67.8MB), run=2915-3694msec 00:11:19.420 00:11:19.420 Disk stats (read/write): 00:11:19.420 nvme0n1: ios=4141/0, merge=0/0, ticks=4289/0, in_queue=4289, util=98.77% 00:11:19.420 nvme0n2: ios=255/0, merge=0/0, ticks=3573/0, in_queue=3573, util=96.00% 00:11:19.420 nvme0n3: ios=6287/0, merge=0/0, ticks=2843/0, in_queue=2843, util=96.72% 00:11:19.420 nvme0n4: ios=5463/0, merge=0/0, ticks=2628/0, in_queue=2628, util=96.70% 00:11:19.677 19:02:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.677 19:02:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:19.935 19:02:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.935 19:02:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:20.192 19:02:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:20.192 19:02:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:20.450 19:02:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:20.450 19:02:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:20.708 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:20.708 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 833139 00:11:20.708 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:20.708 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.965 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.965 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:20.965 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:20.965 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep 
-q -w SPDKISFASTANDAWESOME 00:11:20.965 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:20.965 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.965 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:20.965 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:20.965 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:20.965 nvmf hotplug test: fio failed as expected 00:11:20.965 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:21.222 rmmod nvme_tcp 00:11:21.222 rmmod nvme_fabrics 00:11:21.222 rmmod nvme_keyring 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 831094 ']' 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 831094 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 831094 ']' 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 831094 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 831094 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 831094' 00:11:21.222 killing process with pid 831094 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 831094 00:11:21.222 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@974 -- # wait 831094 00:11:21.480 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:21.480 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:21.480 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:21.480 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:21.480 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:21.480 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.480 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.480 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:24.012 00:11:24.012 real 0m24.542s 00:11:24.012 user 1m23.166s 00:11:24.012 sys 0m7.354s 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 ************************************ 00:11:24.012 END TEST nvmf_fio_target 00:11:24.012 ************************************ 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.012 
19:02:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 ************************************ 00:11:24.012 START TEST nvmf_bdevio 00:11:24.012 ************************************ 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:24.012 * Looking for test storage... 00:11:24.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.012 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:24.013 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:24.013 19:02:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.013 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:24.013 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:24.013 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:24.013 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.013 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.013 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.013 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:24.013 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:24.013 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:24.013 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:26.544 19:02:18 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.544 19:02:18 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:26.544 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:26.544 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:26.544 Found net devices under 0000:09:00.0: cvl_0_0 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:26.544 Found net devices under 0000:09:00.1: cvl_0_1 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.544 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:26.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:11:26.545 00:11:26.545 --- 10.0.0.2 ping statistics --- 00:11:26.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.545 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:26.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:11:26.545 00:11:26.545 --- 10.0.0.1 ping statistics --- 00:11:26.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.545 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=836274 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 836274 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 836274 ']' 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:26.545 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.545 [2024-07-25 19:02:18.715538] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:26.545 [2024-07-25 19:02:18.715608] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.545 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.545 [2024-07-25 19:02:18.786489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.545 [2024-07-25 19:02:18.899241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.545 [2024-07-25 19:02:18.899299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.545 [2024-07-25 19:02:18.899313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.545 [2024-07-25 19:02:18.899325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.545 [2024-07-25 19:02:18.899341] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:26.545 [2024-07-25 19:02:18.899427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:26.545 [2024-07-25 19:02:18.899492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:26.545 [2024-07-25 19:02:18.899556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:26.545 [2024-07-25 19:02:18.899559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.503 [2024-07-25 19:02:19.729904] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.503 19:02:19 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.503 Malloc0 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.503 [2024-07-25 19:02:19.783632] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:27.503 { 00:11:27.503 "params": { 00:11:27.503 "name": "Nvme$subsystem", 00:11:27.503 "trtype": "$TEST_TRANSPORT", 00:11:27.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:27.503 "adrfam": "ipv4", 00:11:27.503 "trsvcid": "$NVMF_PORT", 00:11:27.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:27.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:27.503 "hdgst": ${hdgst:-false}, 00:11:27.503 "ddgst": ${ddgst:-false} 00:11:27.503 }, 00:11:27.503 "method": "bdev_nvme_attach_controller" 00:11:27.503 } 00:11:27.503 EOF 00:11:27.503 )") 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:27.503 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:27.503 "params": { 00:11:27.503 "name": "Nvme1", 00:11:27.503 "trtype": "tcp", 00:11:27.503 "traddr": "10.0.0.2", 00:11:27.503 "adrfam": "ipv4", 00:11:27.503 "trsvcid": "4420", 00:11:27.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:27.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:27.503 "hdgst": false, 00:11:27.503 "ddgst": false 00:11:27.503 }, 00:11:27.503 "method": "bdev_nvme_attach_controller" 00:11:27.503 }' 00:11:27.503 [2024-07-25 19:02:19.830200] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:27.503 [2024-07-25 19:02:19.830280] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid836432 ] 00:11:27.503 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.503 [2024-07-25 19:02:19.901393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:27.769 [2024-07-25 19:02:20.022694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.769 [2024-07-25 19:02:20.022742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.769 [2024-07-25 19:02:20.022745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.027 I/O targets: 00:11:28.027 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:28.027 00:11:28.027 00:11:28.027 CUnit - A unit testing framework for C - Version 2.1-3 00:11:28.027 http://cunit.sourceforge.net/ 00:11:28.027 00:11:28.027 00:11:28.027 Suite: bdevio tests on: Nvme1n1 00:11:28.027 Test: blockdev write read block ...passed 00:11:28.027 Test: blockdev write zeroes read block ...passed 00:11:28.027 Test: blockdev write zeroes read no split 
...passed 00:11:28.027 Test: blockdev write zeroes read split ...passed 00:11:28.285 Test: blockdev write zeroes read split partial ...passed 00:11:28.285 Test: blockdev reset ...[2024-07-25 19:02:20.546010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:28.285 [2024-07-25 19:02:20.546117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f83580 (9): Bad file descriptor 00:11:28.285 [2024-07-25 19:02:20.696433] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:28.285 passed 00:11:28.285 Test: blockdev write read 8 blocks ...passed 00:11:28.285 Test: blockdev write read size > 128k ...passed 00:11:28.285 Test: blockdev write read invalid size ...passed 00:11:28.544 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:28.544 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:28.544 Test: blockdev write read max offset ...passed 00:11:28.544 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:28.544 Test: blockdev writev readv 8 blocks ...passed 00:11:28.544 Test: blockdev writev readv 30 x 1block ...passed 00:11:28.544 Test: blockdev writev readv block ...passed 00:11:28.544 Test: blockdev writev readv size > 128k ...passed 00:11:28.544 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:28.544 Test: blockdev comparev and writev ...[2024-07-25 19:02:20.915906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.544 [2024-07-25 19:02:20.915941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:28.544 [2024-07-25 19:02:20.915966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:11:28.544 [2024-07-25 19:02:20.915983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:28.544 [2024-07-25 19:02:20.916393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.544 [2024-07-25 19:02:20.916416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:28.544 [2024-07-25 19:02:20.916438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.544 [2024-07-25 19:02:20.916454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:28.544 [2024-07-25 19:02:20.916885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.544 [2024-07-25 19:02:20.916908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:28.544 [2024-07-25 19:02:20.916930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.544 [2024-07-25 19:02:20.916945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:28.544 [2024-07-25 19:02:20.917346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.544 [2024-07-25 19:02:20.917371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:28.544 [2024-07-25 19:02:20.917392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.544 [2024-07-25 19:02:20.917407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:28.544 passed 00:11:28.544 Test: blockdev nvme passthru rw ...passed 00:11:28.544 Test: blockdev nvme passthru vendor specific ...[2024-07-25 19:02:21.000495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:28.544 [2024-07-25 19:02:21.000521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:28.544 [2024-07-25 19:02:21.000719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:28.544 [2024-07-25 19:02:21.000742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:28.544 [2024-07-25 19:02:21.000934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:28.544 [2024-07-25 19:02:21.000956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:28.544 [2024-07-25 19:02:21.001154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:28.544 [2024-07-25 19:02:21.001177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:28.544 passed 00:11:28.803 Test: blockdev nvme admin passthru ...passed 00:11:28.803 Test: blockdev copy ...passed 00:11:28.803 00:11:28.803 Run Summary: Type Total Ran Passed Failed Inactive 00:11:28.803 suites 1 1 n/a 0 0 00:11:28.803 tests 23 23 23 0 0 00:11:28.803 asserts 152 152 152 0 n/a 00:11:28.803 00:11:28.803 Elapsed time = 
1.443 seconds 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.063 rmmod nvme_tcp 00:11:29.063 rmmod nvme_fabrics 00:11:29.063 rmmod nvme_keyring 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 836274 ']' 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 836274 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # '[' -z 836274 ']' 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 836274 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 836274 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 836274' 00:11:29.063 killing process with pid 836274 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 836274 00:11:29.063 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 836274 00:11:29.322 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:29.322 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:29.322 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:29.322 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.322 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:29.322 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.322 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.322 19:02:21 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:31.859 00:11:31.859 real 0m7.838s 00:11:31.859 user 0m15.032s 00:11:31.859 sys 0m2.491s 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.859 ************************************ 00:11:31.859 END TEST nvmf_bdevio 00:11:31.859 ************************************ 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:31.859 00:11:31.859 real 4m5.844s 00:11:31.859 user 10m25.656s 00:11:31.859 sys 1m12.244s 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:31.859 ************************************ 00:11:31.859 END TEST nvmf_target_core 00:11:31.859 ************************************ 00:11:31.859 19:02:23 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:31.859 19:02:23 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:31.859 19:02:23 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.859 19:02:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:31.859 ************************************ 00:11:31.859 START TEST nvmf_target_extra 00:11:31.859 ************************************ 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:31.859 * Looking for test storage... 
00:11:31.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.859 19:02:23 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:31.859 ************************************ 00:11:31.859 START TEST nvmf_example 00:11:31.859 ************************************ 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:31.859 * Looking for test storage... 
00:11:31.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.859 19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:31.859 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:31.859 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.859 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.859 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.859 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.859 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.859 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.859 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:31.860 19:02:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:31.860 19:02:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:31.860 19:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:34.392 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:34.392 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:34.393 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:34.393 19:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:34.393 Found net devices under 0000:09:00.0: cvl_0_0 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:34.393 Found net devices under 0000:09:00.1: cvl_0_1 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:34.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:11:34.393 00:11:34.393 --- 10.0.0.2 ping statistics --- 00:11:34.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.393 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:34.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:11:34.393 00:11:34.393 --- 10.0.0.1 ping statistics --- 00:11:34.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.393 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=838965 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 838965 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 838965 ']' 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.393 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.393 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.326 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.326 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:35.326 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:35.326 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.326 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.326 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.326 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.326 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.326 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.326 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:35.326 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.326 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:35.327 19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:35.327 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.524 Initializing NVMe Controllers 00:11:47.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:47.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:47.524 Initialization complete. Launching workers. 00:11:47.524 ======================================================== 00:11:47.524 Latency(us) 00:11:47.524 Device Information : IOPS MiB/s Average min max 00:11:47.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13303.50 51.97 4811.68 913.78 16233.92 00:11:47.524 ======================================================== 00:11:47.524 Total : 13303.50 51.97 4811.68 913.78 16233.92 00:11:47.524 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:47.524 rmmod nvme_tcp 00:11:47.524 rmmod nvme_fabrics 00:11:47.524 rmmod nvme_keyring 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@124 -- # set -e 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 838965 ']' 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 838965 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 838965 ']' 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 838965 00:11:47.524 19:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:47.525 19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:47.525 19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 838965 00:11:47.525 19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:47.525 19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:47.525 19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 838965' 00:11:47.525 killing process with pid 838965 00:11:47.525 19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 838965 00:11:47.525 19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 838965 00:11:47.525 nvmf threads initialize successfully 00:11:47.525 bdev subsystem init successfully 00:11:47.525 created a nvmf target service 00:11:47.525 create targets's poll groups done 00:11:47.525 all subsystems of target started 00:11:47.525 nvmf target is running 00:11:47.525 all subsystems of target stopped 00:11:47.525 destroy targets's poll groups done 00:11:47.525 destroyed the nvmf target service 
00:11:47.525 bdev subsystem finish successfully 00:11:47.525 nvmf threads destroy successfully 00:11:47.525 19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:47.525 19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:47.525 19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:47.525 19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:47.525 19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:47.525 19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.525 19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.525 19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.096 00:11:48.096 real 0m16.396s 00:11:48.096 user 0m39.858s 00:11:48.096 sys 0m5.298s 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.096 ************************************ 00:11:48.096 END TEST nvmf_example 00:11:48.096 ************************************ 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:48.096 ************************************ 00:11:48.096 START TEST nvmf_filesystem 00:11:48.096 ************************************ 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:48.096 * Looking for test storage... 00:11:48.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # 
CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # 
CONFIG_OCF=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:48.096 19:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:48.096 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 
00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 
00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:48.097 #define SPDK_CONFIG_H 00:11:48.097 #define SPDK_CONFIG_APPS 1 00:11:48.097 #define SPDK_CONFIG_ARCH native 00:11:48.097 #undef SPDK_CONFIG_ASAN 00:11:48.097 #undef SPDK_CONFIG_AVAHI 00:11:48.097 #undef SPDK_CONFIG_CET 00:11:48.097 #define SPDK_CONFIG_COVERAGE 1 00:11:48.097 #define SPDK_CONFIG_CROSS_PREFIX 00:11:48.097 #undef SPDK_CONFIG_CRYPTO 00:11:48.097 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:48.097 #undef SPDK_CONFIG_CUSTOMOCF 00:11:48.097 #undef SPDK_CONFIG_DAOS 00:11:48.097 #define SPDK_CONFIG_DAOS_DIR 00:11:48.097 #define SPDK_CONFIG_DEBUG 1 00:11:48.097 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:48.097 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:48.097 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:48.097 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:48.097 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:48.097 #undef SPDK_CONFIG_DPDK_UADK 00:11:48.097 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:48.097 #define SPDK_CONFIG_EXAMPLES 1 00:11:48.097 #undef SPDK_CONFIG_FC 00:11:48.097 #define SPDK_CONFIG_FC_PATH 00:11:48.097 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:48.097 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:48.097 
#undef SPDK_CONFIG_FUSE 00:11:48.097 #undef SPDK_CONFIG_FUZZER 00:11:48.097 #define SPDK_CONFIG_FUZZER_LIB 00:11:48.097 #undef SPDK_CONFIG_GOLANG 00:11:48.097 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:48.097 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:48.097 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:48.097 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:48.097 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:48.097 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:48.097 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:48.097 #define SPDK_CONFIG_IDXD 1 00:11:48.097 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:48.097 #undef SPDK_CONFIG_IPSEC_MB 00:11:48.097 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:48.097 #define SPDK_CONFIG_ISAL 1 00:11:48.097 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:48.097 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:48.097 #define SPDK_CONFIG_LIBDIR 00:11:48.097 #undef SPDK_CONFIG_LTO 00:11:48.097 #define SPDK_CONFIG_MAX_LCORES 128 00:11:48.097 #define SPDK_CONFIG_NVME_CUSE 1 00:11:48.097 #undef SPDK_CONFIG_OCF 00:11:48.097 #define SPDK_CONFIG_OCF_PATH 00:11:48.097 #define SPDK_CONFIG_OPENSSL_PATH 00:11:48.097 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:48.097 #define SPDK_CONFIG_PGO_DIR 00:11:48.097 #undef SPDK_CONFIG_PGO_USE 00:11:48.097 #define SPDK_CONFIG_PREFIX /usr/local 00:11:48.097 #undef SPDK_CONFIG_RAID5F 00:11:48.097 #undef SPDK_CONFIG_RBD 00:11:48.097 #define SPDK_CONFIG_RDMA 1 00:11:48.097 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:48.097 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:48.097 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:48.097 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:48.097 #define SPDK_CONFIG_SHARED 1 00:11:48.097 #undef SPDK_CONFIG_SMA 00:11:48.097 #define SPDK_CONFIG_TESTS 1 00:11:48.097 #undef SPDK_CONFIG_TSAN 00:11:48.097 #define SPDK_CONFIG_UBLK 1 00:11:48.097 #define SPDK_CONFIG_UBSAN 1 00:11:48.097 #undef SPDK_CONFIG_UNIT_TESTS 00:11:48.097 #undef SPDK_CONFIG_URING 00:11:48.097 #define SPDK_CONFIG_URING_PATH 00:11:48.097 #undef 
SPDK_CONFIG_URING_ZNS 00:11:48.097 #undef SPDK_CONFIG_USDT 00:11:48.097 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:48.097 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:48.097 #define SPDK_CONFIG_VFIO_USER 1 00:11:48.097 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:48.097 #define SPDK_CONFIG_VHOST 1 00:11:48.097 #define SPDK_CONFIG_VIRTIO 1 00:11:48.097 #undef SPDK_CONFIG_VTUNE 00:11:48.097 #define SPDK_CONFIG_VTUNE_DIR 00:11:48.097 #define SPDK_CONFIG_WERROR 1 00:11:48.097 #define SPDK_CONFIG_WPDK_DIR 00:11:48.097 #undef SPDK_CONFIG_XNVME 00:11:48.097 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.097 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.098 19:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:48.098 19:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:48.098 
19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:48.098 19:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:48.098 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:48.099 
19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:48.099 19:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:48.099 
19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:48.099 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@265 -- # export valgrind= 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 840666 ]] 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 840666 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.GDxysT 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.GDxysT/tests/target /tmp/spdk.GDxysT 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=952066048 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:48.100 19:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4332363776 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=51217850368 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994708992 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10776858624 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:48.100 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30986096640 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997352448 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=11255808 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:48.101 19:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376334336 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398944256 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22609920 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30995988480 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=1368064 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199463936 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199468032 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:48.101 * Looking for test storage... 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=51217850368 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # 
new_size=12991451136 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.101 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.102 19:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:48.102 19:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:48.102 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:50.635 19:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:50.635 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:50.635 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:50.635 Found net devices under 0000:09:00.0: cvl_0_0 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:50.635 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:50.636 Found net devices under 0000:09:00.1: cvl_0_1 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:50.636 19:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.636 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:50.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:50.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:11:50.894 00:11:50.894 --- 10.0.0.2 ping statistics --- 00:11:50.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.894 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:50.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:11:50.894 00:11:50.894 --- 10.0.0.1 ping statistics --- 00:11:50.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.894 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:50.894 19:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.894 ************************************ 00:11:50.894 START TEST nvmf_filesystem_no_in_capsule 00:11:50.894 ************************************ 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=842587 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.894 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 842587 00:11:50.895 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 842587 ']' 00:11:50.895 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.895 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:50.895 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.895 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:50.895 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.895 [2024-07-25 19:02:43.266081] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:50.895 [2024-07-25 19:02:43.266190] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.895 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.895 [2024-07-25 19:02:43.341018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.152 [2024-07-25 19:02:43.454442] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.152 [2024-07-25 19:02:43.454494] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:51.152 [2024-07-25 19:02:43.454510] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.152 [2024-07-25 19:02:43.454523] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.152 [2024-07-25 19:02:43.454535] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.152 [2024-07-25 19:02:43.454608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.152 [2024-07-25 19:02:43.454674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.152 [2024-07-25 19:02:43.454764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.152 [2024-07-25 19:02:43.454766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.152 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:51.152 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:51.152 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:51.153 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:51.153 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.153 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.153 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:51.153 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:51.153 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.153 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.153 [2024-07-25 19:02:43.612628] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.153 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.153 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:51.153 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.153 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.410 Malloc1 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.410 [2024-07-25 19:02:43.801432] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:51.410 19:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:51.410 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.411 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.411 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.411 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:51.411 { 00:11:51.411 "name": "Malloc1", 00:11:51.411 "aliases": [ 00:11:51.411 "62fbb846-aa1e-4a8e-b1d6-66b6b2e749b3" 00:11:51.411 ], 00:11:51.411 "product_name": "Malloc disk", 00:11:51.411 "block_size": 512, 00:11:51.411 "num_blocks": 1048576, 00:11:51.411 "uuid": "62fbb846-aa1e-4a8e-b1d6-66b6b2e749b3", 00:11:51.411 "assigned_rate_limits": { 00:11:51.411 "rw_ios_per_sec": 0, 00:11:51.411 "rw_mbytes_per_sec": 0, 00:11:51.411 "r_mbytes_per_sec": 0, 00:11:51.411 "w_mbytes_per_sec": 0 00:11:51.411 }, 00:11:51.411 "claimed": true, 00:11:51.411 "claim_type": "exclusive_write", 00:11:51.411 "zoned": false, 00:11:51.411 "supported_io_types": { 00:11:51.411 "read": true, 00:11:51.411 "write": true, 00:11:51.411 "unmap": true, 00:11:51.411 "flush": true, 00:11:51.411 "reset": true, 00:11:51.411 "nvme_admin": false, 00:11:51.411 "nvme_io": false, 00:11:51.411 "nvme_io_md": false, 00:11:51.411 "write_zeroes": true, 00:11:51.411 "zcopy": true, 00:11:51.411 "get_zone_info": false, 00:11:51.411 "zone_management": false, 00:11:51.411 "zone_append": false, 00:11:51.411 "compare": false, 00:11:51.411 "compare_and_write": 
false, 00:11:51.411 "abort": true, 00:11:51.411 "seek_hole": false, 00:11:51.411 "seek_data": false, 00:11:51.411 "copy": true, 00:11:51.411 "nvme_iov_md": false 00:11:51.411 }, 00:11:51.411 "memory_domains": [ 00:11:51.411 { 00:11:51.411 "dma_device_id": "system", 00:11:51.411 "dma_device_type": 1 00:11:51.411 }, 00:11:51.411 { 00:11:51.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.411 "dma_device_type": 2 00:11:51.411 } 00:11:51.411 ], 00:11:51.411 "driver_specific": {} 00:11:51.411 } 00:11:51.411 ]' 00:11:51.411 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:51.411 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:51.411 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:51.668 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:51.668 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:51.668 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:51.668 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:51.668 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.234 19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:52.234 19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:52.234 19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.234 19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:52.234 19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:54.132 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:54.132 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:54.132 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.390 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:54.390 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.390 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:54.390 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:54.390 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:54.390 19:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:54.390 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:54.390 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:54.390 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:54.390 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:54.390 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:54.390 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:54.390 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:54.390 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:54.647 19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:54.904 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:56.277 19:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:56.277 19:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:56.277 19:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:56.277 19:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.277 19:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.277 ************************************ 00:11:56.277 START TEST filesystem_ext4 00:11:56.277 ************************************ 00:11:56.277 19:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:56.277 19:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:56.277 19:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:56.277 19:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:56.277 19:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:56.277 19:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:56.277 19:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:56.277 19:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:56.277 19:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']'
00:11:56.277 19:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F
00:11:56.277 19:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:11:56.277 mke2fs 1.46.5 (30-Dec-2021)
00:11:56.277 Discarding device blocks: 0/522240 done
00:11:56.277 Creating filesystem with 522240 1k blocks and 130560 inodes
00:11:56.277 Filesystem UUID: fad8d4b5-c5a3-4288-bd4c-d7dd1056bae7
00:11:56.277 Superblock backups stored on blocks:
00:11:56.277 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:11:56.277
00:11:56.277 Allocating group tables: 0/64 done
00:11:56.277 Writing inode tables: 0/64 done
00:11:56.536 Creating journal (8192 blocks): done
00:11:57.504 Writing superblocks and filesystem accounting information: 0/64 done
00:11:57.504
00:11:57.504 19:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0
00:11:57.504 19:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:58.069 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:58.328 19:02:50
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 842587 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:58.328 00:11:58.328 real 0m2.233s 00:11:58.328 user 0m0.020s 00:11:58.328 sys 0m0.061s 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:58.328 ************************************ 00:11:58.328 END TEST filesystem_ext4 00:11:58.328 ************************************ 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:58.328 
19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.328 ************************************ 00:11:58.328 START TEST filesystem_btrfs 00:11:58.328 ************************************ 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:58.328 19:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']'
00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f
00:11:58.328 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:11:58.586 btrfs-progs v6.6.2
00:11:58.586 See https://btrfs.readthedocs.io for more information.
00:11:58.586
00:11:58.586 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:11:58.586 NOTE: several default settings have changed in version 5.15, please make sure
00:11:58.586 this does not affect your deployments:
00:11:58.586 - DUP for metadata (-m dup)
00:11:58.586 - enabled no-holes (-O no-holes)
00:11:58.586 - enabled free-space-tree (-R free-space-tree)
00:11:58.586
00:11:58.586 Label: (null)
00:11:58.586 UUID: 1be611be-bbed-487f-8b58-ed721229c5a4
00:11:58.586 Node size: 16384
00:11:58.586 Sector size: 4096
00:11:58.586 Filesystem size: 510.00MiB
00:11:58.586 Block group profiles:
00:11:58.586 Data: single 8.00MiB
00:11:58.586 Metadata: DUP 32.00MiB
00:11:58.586 System: DUP 8.00MiB
00:11:58.586 SSD detected: yes
00:11:58.586 Zoned device: no
00:11:58.586 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:11:58.586 Runtime features: free-space-tree
00:11:58.586 Checksum: crc32c
00:11:58.586 Number of devices: 1
00:11:58.586 Devices:
00:11:58.586 ID SIZE PATH
00:11:58.586 1 510.00MiB /dev/nvme0n1p1
00:11:58.586
00:11:58.586 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0
00:11:58.586 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 842587 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:58.844 00:11:58.844 real 0m0.593s 00:11:58.844 user 0m0.023s 00:11:58.844 sys 0m0.107s 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:58.844 ************************************ 00:11:58.844 END TEST filesystem_btrfs 00:11:58.844 ************************************ 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.844 ************************************ 00:11:58.844 START TEST filesystem_xfs 00:11:58.844 ************************************ 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:58.844 19:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1
00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0
00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force
00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']'
00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f
00:11:58.844 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1
00:11:59.102 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:11:59.102 = sectsz=512 attr=2, projid32bit=1
00:11:59.102 = crc=1 finobt=1, sparse=1, rmapbt=0
00:11:59.102 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:11:59.102 data = bsize=4096 blocks=130560, imaxpct=25
00:11:59.102 = sunit=0 swidth=0 blks
00:11:59.102 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:11:59.102 log =internal log bsize=4096 blocks=16384, version=2
00:11:59.102 = sectsz=512 sunit=0 blks, lazy-count=1
00:11:59.102 realtime =none extsz=4096 blocks=0, rtextents=0
00:12:00.035 Discarding blocks...Done.
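All three TEST blocks traced in this log (filesystem_ext4 at target/filesystem.sh@77, filesystem_btrfs at @78, filesystem_xfs at @79) drive the same make_filesystem helper from common/autotest_common.sh followed by the mount/touch/sync/rm/umount cycle from target/filesystem.sh. A minimal standalone sketch of that cycle, with the device and mount point taken from the log above; the function names here (mkfs_force_flag, filesystem_cycle) are illustrative, not the suite's own, and the suite's retry loop around umount is omitted:

```shell
# Illustrative re-creation of the per-filesystem check traced above.
# DESTRUCTIVE if pointed at a real block device -- for reading, not running.

mkfs_force_flag() {
    # mkfs.ext4 forces with -F; mkfs.btrfs and mkfs.xfs use -f
    # (matches the "'[' ext4 = ext4 ']'" branch visible in the xtrace)
    if [ "$1" = ext4 ]; then echo "-F"; else echo "-f"; fi
}

filesystem_cycle() {
    fstype=$1
    dev=${2:-/dev/nvme0n1p1}   # partition created by the earlier parted/partprobe steps
    mnt=${3:-/mnt/device}
    mkfs."$fstype" "$(mkfs_force_flag "$fstype")" "$dev" || return 1
    mkdir -p "$mnt"
    mount "$dev" "$mnt"
    touch "$mnt/aaa" && sync   # prove the filesystem accepts writes
    rm "$mnt/aaa" && sync
    umount "$mnt"
}
```

After each cycle the suite re-checks with lsblk that the namespace (nvme0n1) and partition (nvme0n1p1) are still visible before the TEST reports PASS.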
00:12:00.035 19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:00.035 19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:01.931 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:01.931 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:01.931 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:01.931 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:01.931 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:01.931 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:01.931 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 842587 00:12:01.931 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:01.931 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:01.931 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:01.931 19:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:01.931 00:12:01.931 real 0m3.024s 00:12:01.931 user 0m0.013s 00:12:01.931 sys 0m0.062s 00:12:01.931 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:01.931 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:01.931 ************************************ 00:12:01.931 END TEST filesystem_xfs 00:12:01.931 ************************************ 00:12:01.931 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:02.188 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:02.188 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 842587 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 842587 ']' 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 842587 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 842587 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 842587' 00:12:02.445 killing process with pid 842587 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 842587 00:12:02.445 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 842587 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:03.010 00:12:03.010 real 0m12.048s 00:12:03.010 user 0m46.005s 00:12:03.010 sys 0m1.875s 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.010 ************************************ 00:12:03.010 END TEST nvmf_filesystem_no_in_capsule 00:12:03.010 ************************************ 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.010 19:02:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:03.010 ************************************ 00:12:03.010 START TEST nvmf_filesystem_in_capsule 00:12:03.010 ************************************ 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=844263 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 844263 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 844263 ']' 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.010 19:02:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:03.010 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.010 [2024-07-25 19:02:55.368857] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:03.010 [2024-07-25 19:02:55.368949] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.010 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.010 [2024-07-25 19:02:55.445764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.268 [2024-07-25 19:02:55.558977] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.268 [2024-07-25 19:02:55.559027] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.268 [2024-07-25 19:02:55.559056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.268 [2024-07-25 19:02:55.559074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.268 [2024-07-25 19:02:55.559085] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:03.268 [2024-07-25 19:02:55.559160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.268 [2024-07-25 19:02:55.559224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.268 [2024-07-25 19:02:55.559290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.268 [2024-07-25 19:02:55.559293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.199 [2024-07-25 19:02:56.404899] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.199 Malloc1 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:04.199 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.200 19:02:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.200 [2024-07-25 19:02:56.576026] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.200 19:02:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:04.200 { 00:12:04.200 "name": "Malloc1", 00:12:04.200 "aliases": [ 00:12:04.200 "7773d0a3-b72c-423c-8f9f-07384a876538" 00:12:04.200 ], 00:12:04.200 "product_name": "Malloc disk", 00:12:04.200 "block_size": 512, 00:12:04.200 "num_blocks": 1048576, 00:12:04.200 "uuid": "7773d0a3-b72c-423c-8f9f-07384a876538", 00:12:04.200 "assigned_rate_limits": { 00:12:04.200 "rw_ios_per_sec": 0, 00:12:04.200 "rw_mbytes_per_sec": 0, 00:12:04.200 "r_mbytes_per_sec": 0, 00:12:04.200 "w_mbytes_per_sec": 0 00:12:04.200 }, 00:12:04.200 "claimed": true, 00:12:04.200 "claim_type": "exclusive_write", 00:12:04.200 "zoned": false, 00:12:04.200 "supported_io_types": { 00:12:04.200 "read": true, 00:12:04.200 "write": true, 00:12:04.200 "unmap": true, 00:12:04.200 "flush": true, 00:12:04.200 "reset": true, 00:12:04.200 "nvme_admin": false, 00:12:04.200 "nvme_io": false, 00:12:04.200 "nvme_io_md": false, 00:12:04.200 "write_zeroes": true, 00:12:04.200 "zcopy": true, 00:12:04.200 "get_zone_info": false, 00:12:04.200 "zone_management": false, 00:12:04.200 "zone_append": false, 00:12:04.200 "compare": false, 00:12:04.200 "compare_and_write": false, 00:12:04.200 "abort": true, 00:12:04.200 "seek_hole": false, 00:12:04.200 "seek_data": false, 00:12:04.200 "copy": true, 00:12:04.200 "nvme_iov_md": false 00:12:04.200 }, 00:12:04.200 "memory_domains": [ 00:12:04.200 { 00:12:04.200 "dma_device_id": "system", 00:12:04.200 "dma_device_type": 1 00:12:04.200 }, 00:12:04.200 { 00:12:04.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.200 "dma_device_type": 2 00:12:04.200 } 00:12:04.200 ], 00:12:04.200 
"driver_specific": {} 00:12:04.200 } 00:12:04.200 ]' 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:04.200 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:05.131 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:05.131 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:05.131 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.131 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:12:05.131 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:07.026 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:07.026 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:07.026 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.026 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:07.027 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.027 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:07.027 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:07.027 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:07.027 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:07.027 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:07.027 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:07.027 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:07.027 19:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:07.027 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:07.027 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:07.027 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:07.027 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:07.027 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:08.397 19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:09.331 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:09.331 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:09.331 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:09.331 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.331 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.331 ************************************ 00:12:09.331 START TEST filesystem_in_capsule_ext4 00:12:09.331 ************************************ 00:12:09.331 19:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:09.331 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:09.331 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:09.331 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:09.331 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:09.331 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:09.331 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:09.331 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:09.331 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:09.331 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:09.331 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:09.331 mke2fs 1.46.5 (30-Dec-2021) 00:12:09.331 Discarding device blocks: 
0/522240 done 00:12:09.331 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:09.331 Filesystem UUID: ed58819f-e7bb-48e8-8bae-34982e1e41b7 00:12:09.331 Superblock backups stored on blocks: 00:12:09.331 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:09.331 00:12:09.331 Allocating group tables: 0/64 done 00:12:09.331 Writing inode tables: 0/64 done 00:12:09.331 Creating journal (8192 blocks): done 00:12:10.412 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:12:10.412 00:12:10.412 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:10.412 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:10.669 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:10.669 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:10.669 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:10.669 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:10.669 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:10.669 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 844263 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:10.928 00:12:10.928 real 0m1.657s 00:12:10.928 user 0m0.021s 00:12:10.928 sys 0m0.048s 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:10.928 ************************************ 00:12:10.928 END TEST filesystem_in_capsule_ext4 00:12:10.928 ************************************ 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:10.928 ************************************ 00:12:10.928 START 
TEST filesystem_in_capsule_btrfs 00:12:10.928 ************************************ 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:10.928 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:11.184 btrfs-progs v6.6.2 00:12:11.184 See https://btrfs.readthedocs.io for more information. 00:12:11.184 00:12:11.184 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:11.184 NOTE: several default settings have changed in version 5.15, please make sure 00:12:11.184 this does not affect your deployments: 00:12:11.184 - DUP for metadata (-m dup) 00:12:11.184 - enabled no-holes (-O no-holes) 00:12:11.184 - enabled free-space-tree (-R free-space-tree) 00:12:11.184 00:12:11.184 Label: (null) 00:12:11.184 UUID: 85a393fd-5276-4fe0-b37e-691c1125d36f 00:12:11.184 Node size: 16384 00:12:11.184 Sector size: 4096 00:12:11.184 Filesystem size: 510.00MiB 00:12:11.184 Block group profiles: 00:12:11.184 Data: single 8.00MiB 00:12:11.184 Metadata: DUP 32.00MiB 00:12:11.184 System: DUP 8.00MiB 00:12:11.184 SSD detected: yes 00:12:11.184 Zoned device: no 00:12:11.184 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:11.184 Runtime features: free-space-tree 00:12:11.184 Checksum: crc32c 00:12:11.184 Number of devices: 1 00:12:11.184 Devices: 00:12:11.184 ID SIZE PATH 00:12:11.184 1 510.00MiB /dev/nvme0n1p1 00:12:11.184 00:12:11.184 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:11.184 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:12.117 19:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 844263 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.117 00:12:12.117 real 0m1.165s 00:12:12.117 user 0m0.020s 00:12:12.117 sys 0m0.120s 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:12.117 ************************************ 00:12:12.117 END TEST 
filesystem_in_capsule_btrfs 00:12:12.117 ************************************ 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.117 ************************************ 00:12:12.117 START TEST filesystem_in_capsule_xfs 00:12:12.117 ************************************ 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:12.117 19:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:12.117 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:12.118 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:12.118 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:12.118 = sectsz=512 attr=2, projid32bit=1 00:12:12.118 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:12.118 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:12.118 data = bsize=4096 blocks=130560, imaxpct=25 00:12:12.118 = sunit=0 swidth=0 blks 00:12:12.118 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:12.118 log =internal log bsize=4096 blocks=16384, version=2 00:12:12.118 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:12.118 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:13.050 Discarding blocks...Done. 
00:12:13.050 19:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:13.050 19:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:14.949 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:14.949 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:14.949 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:14.949 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:14.949 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:14.949 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:14.949 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 844263 00:12:14.949 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:14.949 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:14.949 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:14.949 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:14.949 00:12:14.949 real 0m2.937s 00:12:14.949 user 0m0.016s 00:12:14.949 sys 0m0.053s 00:12:14.949 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.949 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:14.949 ************************************ 00:12:14.949 END TEST filesystem_in_capsule_xfs 00:12:14.949 ************************************ 00:12:14.949 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:15.206 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:15.206 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.465 19:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 844263 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 844263 ']' 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 844263 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:15.465 19:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 844263 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 844263' 00:12:15.465 killing process with pid 844263 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 844263 00:12:15.465 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 844263 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:16.031 00:12:16.031 real 0m12.991s 00:12:16.031 user 0m49.876s 00:12:16.031 sys 0m1.939s 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.031 ************************************ 00:12:16.031 END TEST nvmf_filesystem_in_capsule 00:12:16.031 ************************************ 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.031 rmmod nvme_tcp 00:12:16.031 rmmod nvme_fabrics 00:12:16.031 rmmod nvme_keyring 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.031 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.989 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:17.989 00:12:17.989 real 
0m30.060s 00:12:17.989 user 1m36.944s 00:12:17.989 sys 0m5.773s 00:12:17.989 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.989 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:17.989 ************************************ 00:12:17.989 END TEST nvmf_filesystem 00:12:17.989 ************************************ 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:18.249 ************************************ 00:12:18.249 START TEST nvmf_target_discovery 00:12:18.249 ************************************ 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:18.249 * Looking for test storage... 
00:12:18.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
-- # NVMF_PORT_REFERRAL=4430 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.249 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.250 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:18.250 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:18.250 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:18.250 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:20.786 
19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 
(0x8086 - 0x159b)' 00:12:20.786 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:20.786 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:20.786 19:03:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:20.786 Found net devices under 0000:09:00.0: cvl_0_0 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.786 19:03:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:20.786 Found net devices under 0000:09:00.1: cvl_0_1 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:20.786 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:20.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:20.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:12:20.787 00:12:20.787 --- 10.0.0.2 ping statistics --- 00:12:20.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.787 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:12:20.787 00:12:20.787 --- 10.0.0.1 ping statistics --- 00:12:20.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.787 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:20.787 19:03:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=848903 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 848903 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 848903 ']' 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:20.787 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.787 [2024-07-25 19:03:13.232011] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:20.787 [2024-07-25 19:03:13.232111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.091 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.091 [2024-07-25 19:03:13.313107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.091 [2024-07-25 19:03:13.435176] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.091 [2024-07-25 19:03:13.435257] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.091 [2024-07-25 19:03:13.435274] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.091 [2024-07-25 19:03:13.435288] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.091 [2024-07-25 19:03:13.435300] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:21.091 [2024-07-25 19:03:13.435363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.091 [2024-07-25 19:03:13.435426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.091 [2024-07-25 19:03:13.435478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.091 [2024-07-25 19:03:13.435481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 [2024-07-25 19:03:14.190916] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:22.025 19:03:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 Null1 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 [2024-07-25 19:03:14.231238] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 Null2 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 
19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 Null3 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 Null4 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:22.026 19:03:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:12:22.026 00:12:22.026 Discovery Log Number of Records 6, Generation counter 6 00:12:22.026 =====Discovery Log Entry 0====== 00:12:22.026 trtype: tcp 00:12:22.026 adrfam: ipv4 00:12:22.026 subtype: current discovery subsystem 00:12:22.026 treq: not required 00:12:22.026 portid: 0 00:12:22.026 trsvcid: 4420 00:12:22.026 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:22.026 traddr: 10.0.0.2 00:12:22.026 eflags: explicit discovery connections, duplicate discovery information 00:12:22.026 sectype: none 00:12:22.026 =====Discovery Log Entry 1====== 00:12:22.026 trtype: tcp 00:12:22.026 adrfam: ipv4 00:12:22.026 subtype: nvme subsystem 00:12:22.026 treq: not required 00:12:22.026 portid: 0 00:12:22.026 trsvcid: 4420 00:12:22.026 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:22.026 traddr: 10.0.0.2 00:12:22.026 eflags: none 00:12:22.026 sectype: none 00:12:22.026 =====Discovery Log Entry 2====== 00:12:22.026 trtype: tcp 00:12:22.026 adrfam: ipv4 00:12:22.026 subtype: nvme subsystem 00:12:22.026 treq: not required 00:12:22.026 portid: 0 00:12:22.026 trsvcid: 4420 00:12:22.026 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:22.026 traddr: 10.0.0.2 00:12:22.026 eflags: none 00:12:22.026 sectype: none 00:12:22.026 =====Discovery Log Entry 3====== 00:12:22.026 trtype: tcp 00:12:22.026 adrfam: ipv4 00:12:22.026 subtype: nvme subsystem 00:12:22.026 treq: not required 00:12:22.026 portid: 
0 00:12:22.026 trsvcid: 4420 00:12:22.026 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:22.026 traddr: 10.0.0.2 00:12:22.026 eflags: none 00:12:22.026 sectype: none 00:12:22.026 =====Discovery Log Entry 4====== 00:12:22.026 trtype: tcp 00:12:22.026 adrfam: ipv4 00:12:22.026 subtype: nvme subsystem 00:12:22.026 treq: not required 00:12:22.026 portid: 0 00:12:22.026 trsvcid: 4420 00:12:22.026 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:22.026 traddr: 10.0.0.2 00:12:22.026 eflags: none 00:12:22.026 sectype: none 00:12:22.026 =====Discovery Log Entry 5====== 00:12:22.026 trtype: tcp 00:12:22.026 adrfam: ipv4 00:12:22.026 subtype: discovery subsystem referral 00:12:22.026 treq: not required 00:12:22.026 portid: 0 00:12:22.026 trsvcid: 4430 00:12:22.026 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:22.026 traddr: 10.0.0.2 00:12:22.026 eflags: none 00:12:22.026 sectype: none 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:22.026 Perform nvmf subsystem discovery via RPC 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.026 [ 00:12:22.026 { 00:12:22.026 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:22.026 "subtype": "Discovery", 00:12:22.026 "listen_addresses": [ 00:12:22.026 { 00:12:22.026 "trtype": "TCP", 00:12:22.026 "adrfam": "IPv4", 00:12:22.026 "traddr": "10.0.0.2", 00:12:22.026 "trsvcid": "4420" 00:12:22.026 } 00:12:22.026 ], 00:12:22.026 "allow_any_host": true, 00:12:22.026 "hosts": [] 00:12:22.026 }, 00:12:22.026 { 00:12:22.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.026 "subtype": "NVMe", 00:12:22.026 "listen_addresses": [ 
00:12:22.026 { 00:12:22.026 "trtype": "TCP", 00:12:22.026 "adrfam": "IPv4", 00:12:22.026 "traddr": "10.0.0.2", 00:12:22.026 "trsvcid": "4420" 00:12:22.026 } 00:12:22.026 ], 00:12:22.026 "allow_any_host": true, 00:12:22.026 "hosts": [], 00:12:22.026 "serial_number": "SPDK00000000000001", 00:12:22.026 "model_number": "SPDK bdev Controller", 00:12:22.026 "max_namespaces": 32, 00:12:22.026 "min_cntlid": 1, 00:12:22.026 "max_cntlid": 65519, 00:12:22.026 "namespaces": [ 00:12:22.026 { 00:12:22.026 "nsid": 1, 00:12:22.026 "bdev_name": "Null1", 00:12:22.026 "name": "Null1", 00:12:22.026 "nguid": "5CF90A9E2E9B40C683254577D64D21DD", 00:12:22.026 "uuid": "5cf90a9e-2e9b-40c6-8325-4577d64d21dd" 00:12:22.026 } 00:12:22.026 ] 00:12:22.026 }, 00:12:22.026 { 00:12:22.026 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:22.026 "subtype": "NVMe", 00:12:22.026 "listen_addresses": [ 00:12:22.026 { 00:12:22.026 "trtype": "TCP", 00:12:22.026 "adrfam": "IPv4", 00:12:22.026 "traddr": "10.0.0.2", 00:12:22.026 "trsvcid": "4420" 00:12:22.026 } 00:12:22.026 ], 00:12:22.026 "allow_any_host": true, 00:12:22.026 "hosts": [], 00:12:22.026 "serial_number": "SPDK00000000000002", 00:12:22.026 "model_number": "SPDK bdev Controller", 00:12:22.026 "max_namespaces": 32, 00:12:22.026 "min_cntlid": 1, 00:12:22.026 "max_cntlid": 65519, 00:12:22.026 "namespaces": [ 00:12:22.026 { 00:12:22.026 "nsid": 1, 00:12:22.026 "bdev_name": "Null2", 00:12:22.026 "name": "Null2", 00:12:22.026 "nguid": "1B3F7C2975784317B78FC7CB332ADED7", 00:12:22.026 "uuid": "1b3f7c29-7578-4317-b78f-c7cb332aded7" 00:12:22.026 } 00:12:22.026 ] 00:12:22.026 }, 00:12:22.026 { 00:12:22.026 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:22.026 "subtype": "NVMe", 00:12:22.026 "listen_addresses": [ 00:12:22.026 { 00:12:22.026 "trtype": "TCP", 00:12:22.026 "adrfam": "IPv4", 00:12:22.026 "traddr": "10.0.0.2", 00:12:22.026 "trsvcid": "4420" 00:12:22.026 } 00:12:22.026 ], 00:12:22.026 "allow_any_host": true, 00:12:22.026 "hosts": [], 00:12:22.026 
"serial_number": "SPDK00000000000003", 00:12:22.026 "model_number": "SPDK bdev Controller", 00:12:22.026 "max_namespaces": 32, 00:12:22.026 "min_cntlid": 1, 00:12:22.026 "max_cntlid": 65519, 00:12:22.026 "namespaces": [ 00:12:22.026 { 00:12:22.026 "nsid": 1, 00:12:22.026 "bdev_name": "Null3", 00:12:22.026 "name": "Null3", 00:12:22.026 "nguid": "CF96FE939007440A9024C70DEFFCDF30", 00:12:22.026 "uuid": "cf96fe93-9007-440a-9024-c70deffcdf30" 00:12:22.026 } 00:12:22.026 ] 00:12:22.026 }, 00:12:22.026 { 00:12:22.026 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:22.026 "subtype": "NVMe", 00:12:22.026 "listen_addresses": [ 00:12:22.026 { 00:12:22.026 "trtype": "TCP", 00:12:22.026 "adrfam": "IPv4", 00:12:22.026 "traddr": "10.0.0.2", 00:12:22.026 "trsvcid": "4420" 00:12:22.026 } 00:12:22.026 ], 00:12:22.026 "allow_any_host": true, 00:12:22.026 "hosts": [], 00:12:22.026 "serial_number": "SPDK00000000000004", 00:12:22.026 "model_number": "SPDK bdev Controller", 00:12:22.026 "max_namespaces": 32, 00:12:22.026 "min_cntlid": 1, 00:12:22.026 "max_cntlid": 65519, 00:12:22.026 "namespaces": [ 00:12:22.026 { 00:12:22.026 "nsid": 1, 00:12:22.026 "bdev_name": "Null4", 00:12:22.026 "name": "Null4", 00:12:22.026 "nguid": "328D489EB9B14D2C8C0920A02EEDE143", 00:12:22.026 "uuid": "328d489e-b9b1-4d2c-8c09-20a02eede143" 00:12:22.026 } 00:12:22.026 ] 00:12:22.026 } 00:12:22.026 ] 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:22.026 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.027 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.027 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.027 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:22.027 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.027 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.027 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.027 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:12:22.027 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:22.027 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.027 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:22.285 
19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:22.285 rmmod nvme_tcp 00:12:22.285 rmmod nvme_fabrics 00:12:22.285 rmmod nvme_keyring 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 848903 ']' 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 848903 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 848903 ']' 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 848903 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 848903 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 848903' 00:12:22.285 killing process with pid 848903 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 848903 00:12:22.285 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 848903 00:12:22.545 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:22.545 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:22.545 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:22.545 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:22.545 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:22.545 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.545 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.545 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.082 19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:25.082 00:12:25.082 real 0m6.493s 00:12:25.082 user 0m6.969s 00:12:25.082 sys 0m2.212s 00:12:25.082 19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.082 19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.082 ************************************ 00:12:25.082 END TEST 
nvmf_target_discovery 00:12:25.082 ************************************ 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:25.082 ************************************ 00:12:25.082 START TEST nvmf_referrals 00:12:25.082 ************************************ 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:25.082 * Looking for test storage... 00:12:25.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.082 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.083 19:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:25.083 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:27.614 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:27.614 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:27.614 Found net devices under 0000:09:00.0: cvl_0_0 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found 
net devices under 0000:09:00.1: cvl_0_1' 00:12:27.614 Found net devices under 0000:09:00.1: cvl_0_1 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:27.614 19:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.614 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:27.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:12:27.615 00:12:27.615 --- 10.0.0.2 ping statistics --- 00:12:27.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.615 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:27.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:12:27.615 00:12:27.615 --- 10.0.0.1 ping statistics --- 00:12:27.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.615 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=851288 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 851288 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 851288 ']' 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:27.615 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.615 [2024-07-25 19:03:19.817040] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:27.615 [2024-07-25 19:03:19.817131] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.615 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.615 [2024-07-25 19:03:19.892916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.615 [2024-07-25 19:03:20.004577] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.615 [2024-07-25 19:03:20.004627] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:27.615 [2024-07-25 19:03:20.004657] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.615 [2024-07-25 19:03:20.004669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.615 [2024-07-25 19:03:20.004680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.615 [2024-07-25 19:03:20.004759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.615 [2024-07-25 19:03:20.004808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.615 [2024-07-25 19:03:20.004856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.615 [2024-07-25 19:03:20.004859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.873 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:27.873 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:27.873 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:27.873 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:27.873 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.873 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.873 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:27.873 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.873 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.873 [2024-07-25 19:03:20.169704] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.873 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.873 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.874 [2024-07-25 19:03:20.181906] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:27.874 19:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:27.874 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.132 19:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:28.132 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:28.133 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.390 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:28.390 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:28.390 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:28.390 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:28.390 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:28.390 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.390 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:28.390 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:28.390 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:28.390 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:28.390 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:28.390 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:28.390 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:28.390 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.390 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:28.648 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:28.648 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:28.648 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:28.648 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:28.648 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.648 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:28.648 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:28.906 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:28.906 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:28.906 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:28.906 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:28.906 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:28.906 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.906 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:28.906 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:28.906 19:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:28.906 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:28.906 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:28.906 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.906 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # 
set +e 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:29.164 rmmod nvme_tcp 00:12:29.164 rmmod nvme_fabrics 00:12:29.164 rmmod nvme_keyring 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 851288 ']' 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 851288 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 851288 ']' 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 851288 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 851288 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 851288' 00:12:29.164 killing process with pid 851288 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- 
# kill 851288 00:12:29.164 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 851288 00:12:29.424 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:29.424 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:29.424 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:29.424 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.424 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:29.424 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.424 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.424 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.958 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:31.958 00:12:31.958 real 0m6.886s 00:12:31.958 user 0m8.760s 00:12:31.958 sys 0m2.511s 00:12:31.958 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.958 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.958 ************************************ 00:12:31.958 END TEST nvmf_referrals 00:12:31.958 ************************************ 00:12:31.958 19:03:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:31.958 19:03:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:31.958 
19:03:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.958 19:03:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:31.958 ************************************ 00:12:31.958 START TEST nvmf_connect_disconnect 00:12:31.958 ************************************ 00:12:31.958 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:31.958 * Looking for test storage... 00:12:31.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.958 19:03:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.958 19:03:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:31.958 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.490 19:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.490 19:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:34.490 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:34.490 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:34.490 19:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.490 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:34.491 Found net devices under 0000:09:00.0: cvl_0_0 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.491 
19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:34.491 Found net devices under 0000:09:00.1: cvl_0_1 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:34.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:34.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:12:34.491 00:12:34.491 --- 10.0.0.2 ping statistics --- 00:12:34.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.491 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:12:34.491 00:12:34.491 --- 10.0.0.1 ping statistics --- 00:12:34.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.491 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 
00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=853870 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 853870 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 853870 ']' 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:34.491 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.491 [2024-07-25 19:03:26.627115] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:34.491 [2024-07-25 19:03:26.627200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.491 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.491 [2024-07-25 19:03:26.704953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.491 [2024-07-25 19:03:26.827118] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.491 [2024-07-25 19:03:26.827177] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.491 [2024-07-25 19:03:26.827194] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.491 [2024-07-25 19:03:26.827207] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.491 [2024-07-25 19:03:26.827218] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:34.491 [2024-07-25 19:03:26.827312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.491 [2024-07-25 19:03:26.827381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.491 [2024-07-25 19:03:26.827488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.491 [2024-07-25 19:03:26.827491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.424 [2024-07-25 19:03:27.655646] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.424 19:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.424 [2024-07-25 19:03:27.713660] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:35.424 19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:37.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:49.101 rmmod nvme_tcp 00:12:49.101 rmmod nvme_fabrics 00:12:49.101 rmmod nvme_keyring 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 853870 ']' 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 853870 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 853870 ']' 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 853870 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:49.101 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 853870 00:12:49.102 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:49.102 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:49.102 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 853870' 00:12:49.102 killing process with pid 853870 00:12:49.102 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 853870 00:12:49.102 19:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 853870 00:12:49.360 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:49.360 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:49.360 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:49.360 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:49.360 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:49.360 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.360 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.360 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:51.897 00:12:51.897 real 0m19.766s 00:12:51.897 user 0m59.148s 00:12:51.897 sys 0m3.600s 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.897 ************************************ 00:12:51.897 END TEST nvmf_connect_disconnect 00:12:51.897 ************************************ 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:51.897 ************************************ 00:12:51.897 START TEST nvmf_multitarget 00:12:51.897 ************************************ 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:51.897 * Looking for test storage... 00:12:51.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.897 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.898 19:03:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:51.898 
19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:51.898 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:54.431 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.431 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:54.431 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:54.431 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:54.431 19:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:54.431 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:54.431 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:54.431 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:54.431 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:54.431 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.432 19:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:54.432 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:54.432 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.432 19:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:54.432 Found net devices under 0000:09:00.0: cvl_0_0 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:54.432 Found net devices under 0000:09:00.1: cvl_0_1 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:54.432 19:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:54.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:12:54.432 00:12:54.432 --- 10.0.0.2 ping statistics --- 00:12:54.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.432 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:12:54.432 00:12:54.432 --- 10.0.0.1 ping statistics --- 00:12:54.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.432 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:54.432 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=858049 00:12:54.433 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.433 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 858049 00:12:54.433 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 858049 ']' 00:12:54.433 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.433 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:54.433 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.433 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:54.433 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:54.433 [2024-07-25 19:03:46.637296] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:54.433 [2024-07-25 19:03:46.637372] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.433 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.433 [2024-07-25 19:03:46.714281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.433 [2024-07-25 19:03:46.828687] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.433 [2024-07-25 19:03:46.828735] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:54.433 [2024-07-25 19:03:46.828764] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.433 [2024-07-25 19:03:46.828776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.433 [2024-07-25 19:03:46.828786] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.433 [2024-07-25 19:03:46.828896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.433 [2024-07-25 19:03:46.828955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.433 [2024-07-25 19:03:46.828986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.433 [2024-07-25 19:03:46.828989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.365 19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:55.365 19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:55.365 19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:55.365 19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:55.365 19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:55.365 19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.365 19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:55.365 19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:55.365 19:03:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:55.365 19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:55.366 19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:55.623 "nvmf_tgt_1" 00:12:55.623 19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:55.623 "nvmf_tgt_2" 00:12:55.623 19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:55.623 19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:55.880 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:55.880 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:55.880 true 00:12:55.880 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:55.880 true 00:12:55.880 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:55.880 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:56.138 rmmod nvme_tcp 00:12:56.138 rmmod nvme_fabrics 00:12:56.138 rmmod nvme_keyring 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 858049 ']' 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 858049 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 858049 ']' 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 858049 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 858049 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:56.138 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:56.139 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 858049' 00:12:56.139 killing process with pid 858049 00:12:56.139 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 858049 00:12:56.139 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 858049 00:12:56.397 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:56.397 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:56.397 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:56.397 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.397 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:56.397 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.397 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.397 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:58.967 00:12:58.967 real 0m7.066s 
00:12:58.967 user 0m9.676s 00:12:58.967 sys 0m2.398s 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:58.967 ************************************ 00:12:58.967 END TEST nvmf_multitarget 00:12:58.967 ************************************ 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.967 ************************************ 00:12:58.967 START TEST nvmf_rpc 00:12:58.967 ************************************ 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:58.967 * Looking for test storage... 
00:12:58.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.967 
19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.967 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.968 19:03:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:58.968 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:01.528 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:01.528 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:01.528 Found net devices under 0000:09:00.0: cvl_0_0 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.528 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:01.529 Found net devices under 0000:09:00.1: cvl_0_1 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.529 19:03:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:01.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:01.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:13:01.529 00:13:01.529 --- 10.0.0.2 ping statistics --- 00:13:01.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.529 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:01.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:13:01.529 00:13:01.529 --- 10.0.0.1 ping statistics --- 00:13:01.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.529 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=860572 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 860572 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 860572 ']' 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:01.529 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.529 [2024-07-25 19:03:53.785374] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:01.529 [2024-07-25 19:03:53.785461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.529 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.529 [2024-07-25 19:03:53.866990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:01.529 [2024-07-25 19:03:53.981775] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.529 [2024-07-25 19:03:53.981825] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.529 [2024-07-25 19:03:53.981853] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.529 [2024-07-25 19:03:53.981865] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.529 [2024-07-25 19:03:53.981875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:01.529 [2024-07-25 19:03:53.981955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.529 [2024-07-25 19:03:53.982021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.529 [2024-07-25 19:03:53.985121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:01.529 [2024-07-25 19:03:53.985132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:02.463 "tick_rate": 2700000000, 00:13:02.463 "poll_groups": [ 00:13:02.463 { 00:13:02.463 "name": "nvmf_tgt_poll_group_000", 00:13:02.463 "admin_qpairs": 0, 00:13:02.463 "io_qpairs": 0, 00:13:02.463 "current_admin_qpairs": 0, 00:13:02.463 "current_io_qpairs": 0, 00:13:02.463 "pending_bdev_io": 0, 00:13:02.463 "completed_nvme_io": 0, 
00:13:02.463 "transports": [] 00:13:02.463 }, 00:13:02.463 { 00:13:02.463 "name": "nvmf_tgt_poll_group_001", 00:13:02.463 "admin_qpairs": 0, 00:13:02.463 "io_qpairs": 0, 00:13:02.463 "current_admin_qpairs": 0, 00:13:02.463 "current_io_qpairs": 0, 00:13:02.463 "pending_bdev_io": 0, 00:13:02.463 "completed_nvme_io": 0, 00:13:02.463 "transports": [] 00:13:02.463 }, 00:13:02.463 { 00:13:02.463 "name": "nvmf_tgt_poll_group_002", 00:13:02.463 "admin_qpairs": 0, 00:13:02.463 "io_qpairs": 0, 00:13:02.463 "current_admin_qpairs": 0, 00:13:02.463 "current_io_qpairs": 0, 00:13:02.463 "pending_bdev_io": 0, 00:13:02.463 "completed_nvme_io": 0, 00:13:02.463 "transports": [] 00:13:02.463 }, 00:13:02.463 { 00:13:02.463 "name": "nvmf_tgt_poll_group_003", 00:13:02.463 "admin_qpairs": 0, 00:13:02.463 "io_qpairs": 0, 00:13:02.463 "current_admin_qpairs": 0, 00:13:02.463 "current_io_qpairs": 0, 00:13:02.463 "pending_bdev_io": 0, 00:13:02.463 "completed_nvme_io": 0, 00:13:02.463 "transports": [] 00:13:02.463 } 00:13:02.463 ] 00:13:02.463 }' 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.463 [2024-07-25 19:03:54.874006] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:02.463 "tick_rate": 2700000000, 00:13:02.463 "poll_groups": [ 00:13:02.463 { 00:13:02.463 "name": "nvmf_tgt_poll_group_000", 00:13:02.463 "admin_qpairs": 0, 00:13:02.463 "io_qpairs": 0, 00:13:02.463 "current_admin_qpairs": 0, 00:13:02.463 "current_io_qpairs": 0, 00:13:02.463 "pending_bdev_io": 0, 00:13:02.463 "completed_nvme_io": 0, 00:13:02.463 "transports": [ 00:13:02.463 { 00:13:02.463 "trtype": "TCP" 00:13:02.463 } 00:13:02.463 ] 00:13:02.463 }, 00:13:02.463 { 00:13:02.463 "name": "nvmf_tgt_poll_group_001", 00:13:02.463 "admin_qpairs": 0, 00:13:02.463 "io_qpairs": 0, 00:13:02.463 "current_admin_qpairs": 0, 00:13:02.463 "current_io_qpairs": 0, 00:13:02.463 "pending_bdev_io": 0, 00:13:02.463 "completed_nvme_io": 0, 00:13:02.463 "transports": [ 00:13:02.463 { 00:13:02.463 "trtype": "TCP" 00:13:02.463 } 00:13:02.463 ] 00:13:02.463 }, 00:13:02.463 { 00:13:02.463 "name": "nvmf_tgt_poll_group_002", 00:13:02.463 "admin_qpairs": 0, 00:13:02.463 "io_qpairs": 0, 00:13:02.463 "current_admin_qpairs": 0, 00:13:02.463 "current_io_qpairs": 0, 00:13:02.463 "pending_bdev_io": 0, 00:13:02.463 "completed_nvme_io": 0, 00:13:02.463 
"transports": [ 00:13:02.463 { 00:13:02.463 "trtype": "TCP" 00:13:02.463 } 00:13:02.463 ] 00:13:02.463 }, 00:13:02.463 { 00:13:02.463 "name": "nvmf_tgt_poll_group_003", 00:13:02.463 "admin_qpairs": 0, 00:13:02.463 "io_qpairs": 0, 00:13:02.463 "current_admin_qpairs": 0, 00:13:02.463 "current_io_qpairs": 0, 00:13:02.463 "pending_bdev_io": 0, 00:13:02.463 "completed_nvme_io": 0, 00:13:02.463 "transports": [ 00:13:02.463 { 00:13:02.463 "trtype": "TCP" 00:13:02.463 } 00:13:02.463 ] 00:13:02.463 } 00:13:02.463 ] 00:13:02.463 }' 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:02.463 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:02.722 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:02.722 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:02.722 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:02.722 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:02.722 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:02.722 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:02.722 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:02.722 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:02.722 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:02.722 19:03:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:02.722 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.722 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 Malloc1 00:13:02.722 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 [2024-07-25 19:03:55.028577] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:13:02.722 [2024-07-25 19:03:55.051179] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:13:02.722 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:02.722 could not add new controller: failed to write to nvme-fabrics device 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:02.722 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.655 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.655 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:03.655 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.655 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:03.655 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- 
# local i=0 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:05.555 19:03:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.555 [2024-07-25 19:03:57.926003] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:13:05.555 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:05.555 could not add new controller: failed to write to nvme-fabrics device 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.555 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.489 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.489 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:06.489 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.489 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:06.489 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:08.389 19:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.389 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.390 19:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.390 [2024-07-25 19:04:00.747349] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.390 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:09.322 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.322 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:09.322 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.322 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:09.322 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.220 [2024-07-25 19:04:03.601268] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.220 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.787 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.787 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:13:11.787 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.787 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:11.787 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:14.315 [2024-07-25 19:04:06.371992] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.315 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.573 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:14.573 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:14.573 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.573 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:14.573 
19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:17.102 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:17.102 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:17.102 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 [2024-07-25 19:04:09.148838] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.102 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.360 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.360 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:17.360 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.360 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:17.360 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.922 [2024-07-25 19:04:11.919605] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.922 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:20.180 19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:20.180 19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:20.180 19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.180 19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:20.180 19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:22.075 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:22.075 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:22.075 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.332 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:22.332 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.332 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:22.332 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.332 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.332 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:22.332 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:22.332 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.333 19:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 [2024-07-25 19:04:14.700066] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 
19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 
19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 [2024-07-25 19:04:14.748190] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 [2024-07-25 19:04:14.796342] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.333 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.590 [2024-07-25 19:04:14.844521] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.590 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.591 [2024-07-25 19:04:14.892670] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.591 19:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.591 19:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:22.591 "tick_rate": 2700000000, 00:13:22.591 "poll_groups": [ 00:13:22.591 { 00:13:22.591 "name": "nvmf_tgt_poll_group_000", 00:13:22.591 "admin_qpairs": 2, 00:13:22.591 "io_qpairs": 84, 00:13:22.591 "current_admin_qpairs": 0, 00:13:22.591 "current_io_qpairs": 0, 00:13:22.591 "pending_bdev_io": 0, 00:13:22.591 "completed_nvme_io": 184, 00:13:22.591 "transports": [ 00:13:22.591 { 00:13:22.591 "trtype": "TCP" 00:13:22.591 } 00:13:22.591 ] 00:13:22.591 }, 00:13:22.591 { 00:13:22.591 "name": "nvmf_tgt_poll_group_001", 00:13:22.591 "admin_qpairs": 2, 00:13:22.591 "io_qpairs": 84, 00:13:22.591 "current_admin_qpairs": 0, 00:13:22.591 "current_io_qpairs": 0, 00:13:22.591 "pending_bdev_io": 0, 00:13:22.591 "completed_nvme_io": 135, 00:13:22.591 "transports": [ 00:13:22.591 { 00:13:22.591 "trtype": "TCP" 00:13:22.591 } 00:13:22.591 ] 00:13:22.591 }, 00:13:22.591 { 00:13:22.591 "name": "nvmf_tgt_poll_group_002", 00:13:22.591 "admin_qpairs": 1, 00:13:22.591 "io_qpairs": 84, 00:13:22.591 "current_admin_qpairs": 0, 00:13:22.591 "current_io_qpairs": 0, 00:13:22.591 "pending_bdev_io": 0, 00:13:22.591 "completed_nvme_io": 203, 00:13:22.591 "transports": [ 00:13:22.591 { 00:13:22.591 "trtype": "TCP" 00:13:22.591 } 00:13:22.591 ] 00:13:22.591 }, 00:13:22.591 { 00:13:22.591 "name": "nvmf_tgt_poll_group_003", 00:13:22.591 "admin_qpairs": 2, 00:13:22.591 "io_qpairs": 84, 00:13:22.591 "current_admin_qpairs": 0, 00:13:22.591 "current_io_qpairs": 0, 00:13:22.591 "pending_bdev_io": 0, 
00:13:22.591 "completed_nvme_io": 164, 00:13:22.591 "transports": [ 00:13:22.591 { 00:13:22.591 "trtype": "TCP" 00:13:22.591 } 00:13:22.591 ] 00:13:22.591 } 00:13:22.591 ] 00:13:22.591 }' 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:22.591 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:22.591 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:22.591 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:22.591 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:22.591 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:22.591 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:22.591 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:22.591 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:22.591 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
# set +e 00:13:22.591 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:22.591 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:22.591 rmmod nvme_tcp 00:13:22.591 rmmod nvme_fabrics 00:13:22.591 rmmod nvme_keyring 00:13:22.591 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:22.849 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:22.849 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:22.849 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 860572 ']' 00:13:22.849 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 860572 00:13:22.849 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 860572 ']' 00:13:22.849 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 860572 00:13:22.849 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:22.849 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:22.849 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 860572 00:13:22.849 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:22.849 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:22.849 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 860572' 00:13:22.849 killing process with pid 860572 00:13:22.849 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 860572 00:13:22.849 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@974 -- # wait 860572 00:13:23.107 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:23.107 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:23.107 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:23.107 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.107 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.107 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.107 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.107 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.014 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:25.014 00:13:25.014 real 0m26.533s 00:13:25.014 user 1m24.912s 00:13:25.014 sys 0m4.666s 00:13:25.014 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:25.014 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.014 ************************************ 00:13:25.014 END TEST nvmf_rpc 00:13:25.014 ************************************ 00:13:25.014 19:04:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:25.014 19:04:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:25.014 19:04:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.014 19:04:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
set +x 00:13:25.273 ************************************ 00:13:25.273 START TEST nvmf_invalid 00:13:25.273 ************************************ 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:25.273 * Looking for test storage... 00:13:25.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.273 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:25.274 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.805 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.805 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:27.805 19:04:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:27.805 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:27.805 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:27.805 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:27.805 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:27.805 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:27.805 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:27.805 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:27.805 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.806 
19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:27.806 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.806 19:04:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:27.806 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.806 
19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:27.806 Found net devices under 0000:09:00.0: cvl_0_0 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:27.806 Found net devices under 0000:09:00.1: cvl_0_1 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.806 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:27.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:13:27.806 00:13:27.806 --- 10.0.0.2 ping statistics --- 00:13:27.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.806 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:27.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:13:27.806 00:13:27.806 --- 10.0.0.1 ping statistics --- 00:13:27.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.806 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=865480 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 865480 00:13:27.806 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 865480 ']' 00:13:27.807 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.807 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.807 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.807 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:27.807 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.807 [2024-07-25 19:04:20.147141] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:27.807 [2024-07-25 19:04:20.147233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.807 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.807 [2024-07-25 19:04:20.226303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.065 [2024-07-25 19:04:20.350188] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.065 [2024-07-25 19:04:20.350248] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:28.065 [2024-07-25 19:04:20.350264] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.065 [2024-07-25 19:04:20.350277] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.065 [2024-07-25 19:04:20.350289] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.065 [2024-07-25 19:04:20.350373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.065 [2024-07-25 19:04:20.350445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.065 [2024-07-25 19:04:20.350505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.065 [2024-07-25 19:04:20.350509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.998 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:28.998 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:28.998 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:28.998 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:28.998 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:28.998 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.998 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:28.998 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12317 00:13:28.998 [2024-07-25 19:04:21.356445] 
nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:28.998 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:28.998 { 00:13:28.998 "nqn": "nqn.2016-06.io.spdk:cnode12317", 00:13:28.998 "tgt_name": "foobar", 00:13:28.998 "method": "nvmf_create_subsystem", 00:13:28.998 "req_id": 1 00:13:28.998 } 00:13:28.998 Got JSON-RPC error response 00:13:28.998 response: 00:13:28.998 { 00:13:28.998 "code": -32603, 00:13:28.998 "message": "Unable to find target foobar" 00:13:28.998 }' 00:13:28.998 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:28.998 { 00:13:28.998 "nqn": "nqn.2016-06.io.spdk:cnode12317", 00:13:28.998 "tgt_name": "foobar", 00:13:28.998 "method": "nvmf_create_subsystem", 00:13:28.998 "req_id": 1 00:13:28.998 } 00:13:28.998 Got JSON-RPC error response 00:13:28.998 response: 00:13:28.998 { 00:13:28.998 "code": -32603, 00:13:28.998 "message": "Unable to find target foobar" 00:13:28.998 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:28.998 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:28.998 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9870 00:13:29.256 [2024-07-25 19:04:21.649396] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9870: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:29.256 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:29.256 { 00:13:29.256 "nqn": "nqn.2016-06.io.spdk:cnode9870", 00:13:29.256 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:29.256 "method": "nvmf_create_subsystem", 00:13:29.256 "req_id": 1 00:13:29.256 } 00:13:29.256 Got JSON-RPC error response 00:13:29.256 response: 
00:13:29.256 { 00:13:29.256 "code": -32602, 00:13:29.256 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:29.256 }' 00:13:29.256 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:29.256 { 00:13:29.256 "nqn": "nqn.2016-06.io.spdk:cnode9870", 00:13:29.256 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:29.256 "method": "nvmf_create_subsystem", 00:13:29.256 "req_id": 1 00:13:29.256 } 00:13:29.256 Got JSON-RPC error response 00:13:29.256 response: 00:13:29.256 { 00:13:29.256 "code": -32602, 00:13:29.256 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:29.256 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:29.256 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:29.256 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17178 00:13:29.515 [2024-07-25 19:04:21.910231] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17178: invalid model number 'SPDK_Controller' 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:29.515 { 00:13:29.515 "nqn": "nqn.2016-06.io.spdk:cnode17178", 00:13:29.515 "model_number": "SPDK_Controller\u001f", 00:13:29.515 "method": "nvmf_create_subsystem", 00:13:29.515 "req_id": 1 00:13:29.515 } 00:13:29.515 Got JSON-RPC error response 00:13:29.515 response: 00:13:29.515 { 00:13:29.515 "code": -32602, 00:13:29.515 "message": "Invalid MN SPDK_Controller\u001f" 00:13:29.515 }' 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:29.515 { 00:13:29.515 "nqn": "nqn.2016-06.io.spdk:cnode17178", 00:13:29.515 "model_number": "SPDK_Controller\u001f", 00:13:29.515 "method": "nvmf_create_subsystem", 00:13:29.515 "req_id": 1 00:13:29.515 } 
00:13:29.515 Got JSON-RPC error response 00:13:29.515 response: 00:13:29.515 { 00:13:29.515 "code": -32602, 00:13:29.515 "message": "Invalid MN SPDK_Controller\u001f" 00:13:29.515 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.515 19:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.515 19:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:29.515 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:29.516 19:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:29.516 19:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 
00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.516 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.775 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:29.775 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:29.775 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:29.775 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.775 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.775 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:29.775 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:29.775 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:29.775 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.775 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.775 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ x == \- ]] 00:13:29.775 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'xv\7gb])vj'\''\O[?h\8I=b' 00:13:29.775 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'xv\7gb])vj'\''\O[?h\8I=b' nqn.2016-06.io.spdk:cnode3841 00:13:29.775 [2024-07-25 19:04:22.219237] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3841: invalid serial number 
'xv\7gb])vj'\O[?h\8I=b' 00:13:29.775 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:29.775 { 00:13:29.775 "nqn": "nqn.2016-06.io.spdk:cnode3841", 00:13:29.775 "serial_number": "xv\\7gb])vj'\''\\O[?h\\8I=b", 00:13:29.775 "method": "nvmf_create_subsystem", 00:13:29.775 "req_id": 1 00:13:29.775 } 00:13:29.775 Got JSON-RPC error response 00:13:29.775 response: 00:13:29.775 { 00:13:29.775 "code": -32602, 00:13:29.775 "message": "Invalid SN xv\\7gb])vj'\''\\O[?h\\8I=b" 00:13:29.775 }' 00:13:29.775 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:29.775 { 00:13:29.775 "nqn": "nqn.2016-06.io.spdk:cnode3841", 00:13:29.775 "serial_number": "xv\\7gb])vj'\\O[?h\\8I=b", 00:13:29.775 "method": "nvmf_create_subsystem", 00:13:29.775 "req_id": 1 00:13:29.775 } 00:13:29.775 Got JSON-RPC error response 00:13:29.775 response: 00:13:29.775 { 00:13:29.775 "code": -32602, 00:13:29.775 "message": "Invalid SN xv\\7gb])vj'\\O[?h\\8I=b" 00:13:29.775 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:29.775 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:29.775 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:29.775 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:29.775 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 
00:13:29.775 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:29.775 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:29.775 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.775 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:30.034 
19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:30.034 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:30.035 19:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:30.035 19:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:30.035 19:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:30.035 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.036 19:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 
00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ W == \- ]] 00:13:30.036 
19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'W5@5">^N-Sa_@z'\''Gu}}Ux~?eB58H=ufYb.<#[a=k>' 00:13:30.036 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'W5@5">^N-Sa_@z'\''Gu}}Ux~?eB58H=ufYb.<#[a=k>' nqn.2016-06.io.spdk:cnode10318 00:13:30.294 [2024-07-25 19:04:22.612621] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10318: invalid model number 'W5@5">^N-Sa_@z'Gu}}Ux~?eB58H=ufYb.<#[a=k>' 00:13:30.294 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:30.294 { 00:13:30.294 "nqn": "nqn.2016-06.io.spdk:cnode10318", 00:13:30.294 "model_number": "W5@5\">^N-Sa_@z'\''Gu}}Ux~?eB58H=ufYb.<#[a=k>", 00:13:30.294 "method": "nvmf_create_subsystem", 00:13:30.294 "req_id": 1 00:13:30.294 } 00:13:30.294 Got JSON-RPC error response 00:13:30.294 response: 00:13:30.294 { 00:13:30.294 "code": -32602, 00:13:30.294 "message": "Invalid MN W5@5\">^N-Sa_@z'\''Gu}}Ux~?eB58H=ufYb.<#[a=k>" 00:13:30.294 }' 00:13:30.294 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:30.294 { 00:13:30.294 "nqn": "nqn.2016-06.io.spdk:cnode10318", 00:13:30.294 "model_number": "W5@5\">^N-Sa_@z'Gu}}Ux~?eB58H=ufYb.<#[a=k>", 00:13:30.294 "method": "nvmf_create_subsystem", 00:13:30.294 "req_id": 1 00:13:30.294 } 00:13:30.294 Got JSON-RPC error response 00:13:30.294 response: 00:13:30.294 { 00:13:30.294 "code": -32602, 00:13:30.294 "message": "Invalid MN W5@5\">^N-Sa_@z'Gu}}Ux~?eB58H=ufYb.<#[a=k>" 00:13:30.294 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:30.294 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:30.552 [2024-07-25 19:04:22.869580] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:13:30.552 19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:30.810 19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:30.810 19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:30.810 19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:30.810 19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:30.810 19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:31.068 [2024-07-25 19:04:23.383219] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:31.068 19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:31.068 { 00:13:31.068 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:31.068 "listen_address": { 00:13:31.068 "trtype": "tcp", 00:13:31.068 "traddr": "", 00:13:31.068 "trsvcid": "4421" 00:13:31.068 }, 00:13:31.068 "method": "nvmf_subsystem_remove_listener", 00:13:31.068 "req_id": 1 00:13:31.068 } 00:13:31.068 Got JSON-RPC error response 00:13:31.068 response: 00:13:31.068 { 00:13:31.068 "code": -32602, 00:13:31.068 "message": "Invalid parameters" 00:13:31.068 }' 00:13:31.068 19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:31.068 { 00:13:31.068 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:31.068 "listen_address": { 00:13:31.068 "trtype": "tcp", 00:13:31.068 "traddr": "", 00:13:31.068 "trsvcid": "4421" 00:13:31.068 }, 00:13:31.068 "method": "nvmf_subsystem_remove_listener", 00:13:31.068 "req_id": 1 00:13:31.068 } 00:13:31.068 Got JSON-RPC error response 
00:13:31.068 response: 00:13:31.068 { 00:13:31.068 "code": -32602, 00:13:31.068 "message": "Invalid parameters" 00:13:31.068 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:31.068 19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27191 -i 0 00:13:31.329 [2024-07-25 19:04:23.623954] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27191: invalid cntlid range [0-65519] 00:13:31.329 19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:31.329 { 00:13:31.329 "nqn": "nqn.2016-06.io.spdk:cnode27191", 00:13:31.329 "min_cntlid": 0, 00:13:31.329 "method": "nvmf_create_subsystem", 00:13:31.329 "req_id": 1 00:13:31.329 } 00:13:31.329 Got JSON-RPC error response 00:13:31.329 response: 00:13:31.329 { 00:13:31.329 "code": -32602, 00:13:31.329 "message": "Invalid cntlid range [0-65519]" 00:13:31.329 }' 00:13:31.329 19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:31.329 { 00:13:31.329 "nqn": "nqn.2016-06.io.spdk:cnode27191", 00:13:31.329 "min_cntlid": 0, 00:13:31.329 "method": "nvmf_create_subsystem", 00:13:31.329 "req_id": 1 00:13:31.329 } 00:13:31.329 Got JSON-RPC error response 00:13:31.329 response: 00:13:31.329 { 00:13:31.329 "code": -32602, 00:13:31.329 "message": "Invalid cntlid range [0-65519]" 00:13:31.329 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:31.329 19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28380 -i 65520 00:13:31.587 [2024-07-25 19:04:23.872822] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28380: invalid cntlid range [65520-65519] 00:13:31.587 19:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:31.587 { 00:13:31.587 "nqn": "nqn.2016-06.io.spdk:cnode28380", 00:13:31.587 "min_cntlid": 65520, 00:13:31.587 "method": "nvmf_create_subsystem", 00:13:31.587 "req_id": 1 00:13:31.587 } 00:13:31.587 Got JSON-RPC error response 00:13:31.587 response: 00:13:31.587 { 00:13:31.587 "code": -32602, 00:13:31.587 "message": "Invalid cntlid range [65520-65519]" 00:13:31.587 }' 00:13:31.587 19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:31.587 { 00:13:31.587 "nqn": "nqn.2016-06.io.spdk:cnode28380", 00:13:31.587 "min_cntlid": 65520, 00:13:31.587 "method": "nvmf_create_subsystem", 00:13:31.587 "req_id": 1 00:13:31.587 } 00:13:31.587 Got JSON-RPC error response 00:13:31.587 response: 00:13:31.587 { 00:13:31.587 "code": -32602, 00:13:31.587 "message": "Invalid cntlid range [65520-65519]" 00:13:31.587 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:31.587 19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28347 -I 0 00:13:31.844 [2024-07-25 19:04:24.125639] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28347: invalid cntlid range [1-0] 00:13:31.844 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:31.844 { 00:13:31.844 "nqn": "nqn.2016-06.io.spdk:cnode28347", 00:13:31.844 "max_cntlid": 0, 00:13:31.844 "method": "nvmf_create_subsystem", 00:13:31.844 "req_id": 1 00:13:31.844 } 00:13:31.844 Got JSON-RPC error response 00:13:31.845 response: 00:13:31.845 { 00:13:31.845 "code": -32602, 00:13:31.845 "message": "Invalid cntlid range [1-0]" 00:13:31.845 }' 00:13:31.845 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:31.845 { 00:13:31.845 "nqn": 
"nqn.2016-06.io.spdk:cnode28347", 00:13:31.845 "max_cntlid": 0, 00:13:31.845 "method": "nvmf_create_subsystem", 00:13:31.845 "req_id": 1 00:13:31.845 } 00:13:31.845 Got JSON-RPC error response 00:13:31.845 response: 00:13:31.845 { 00:13:31.845 "code": -32602, 00:13:31.845 "message": "Invalid cntlid range [1-0]" 00:13:31.845 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:31.845 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12192 -I 65520 00:13:32.102 [2024-07-25 19:04:24.370456] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12192: invalid cntlid range [1-65520] 00:13:32.102 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:32.102 { 00:13:32.102 "nqn": "nqn.2016-06.io.spdk:cnode12192", 00:13:32.102 "max_cntlid": 65520, 00:13:32.102 "method": "nvmf_create_subsystem", 00:13:32.102 "req_id": 1 00:13:32.102 } 00:13:32.102 Got JSON-RPC error response 00:13:32.102 response: 00:13:32.102 { 00:13:32.102 "code": -32602, 00:13:32.102 "message": "Invalid cntlid range [1-65520]" 00:13:32.102 }' 00:13:32.102 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:32.102 { 00:13:32.102 "nqn": "nqn.2016-06.io.spdk:cnode12192", 00:13:32.102 "max_cntlid": 65520, 00:13:32.102 "method": "nvmf_create_subsystem", 00:13:32.102 "req_id": 1 00:13:32.102 } 00:13:32.102 Got JSON-RPC error response 00:13:32.102 response: 00:13:32.102 { 00:13:32.102 "code": -32602, 00:13:32.102 "message": "Invalid cntlid range [1-65520]" 00:13:32.102 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.102 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31103 -i 6 -I 5 00:13:32.360 
[2024-07-25 19:04:24.615321] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31103: invalid cntlid range [6-5] 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:32.360 { 00:13:32.360 "nqn": "nqn.2016-06.io.spdk:cnode31103", 00:13:32.360 "min_cntlid": 6, 00:13:32.360 "max_cntlid": 5, 00:13:32.360 "method": "nvmf_create_subsystem", 00:13:32.360 "req_id": 1 00:13:32.360 } 00:13:32.360 Got JSON-RPC error response 00:13:32.360 response: 00:13:32.360 { 00:13:32.360 "code": -32602, 00:13:32.360 "message": "Invalid cntlid range [6-5]" 00:13:32.360 }' 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:32.360 { 00:13:32.360 "nqn": "nqn.2016-06.io.spdk:cnode31103", 00:13:32.360 "min_cntlid": 6, 00:13:32.360 "max_cntlid": 5, 00:13:32.360 "method": "nvmf_create_subsystem", 00:13:32.360 "req_id": 1 00:13:32.360 } 00:13:32.360 Got JSON-RPC error response 00:13:32.360 response: 00:13:32.360 { 00:13:32.360 "code": -32602, 00:13:32.360 "message": "Invalid cntlid range [6-5]" 00:13:32.360 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:32.360 { 00:13:32.360 "name": "foobar", 00:13:32.360 "method": "nvmf_delete_target", 00:13:32.360 "req_id": 1 00:13:32.360 } 00:13:32.360 Got JSON-RPC error response 00:13:32.360 response: 00:13:32.360 { 00:13:32.360 "code": -32602, 00:13:32.360 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:13:32.360 }' 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:32.360 { 00:13:32.360 "name": "foobar", 00:13:32.360 "method": "nvmf_delete_target", 00:13:32.360 "req_id": 1 00:13:32.360 } 00:13:32.360 Got JSON-RPC error response 00:13:32.360 response: 00:13:32.360 { 00:13:32.360 "code": -32602, 00:13:32.360 "message": "The specified target doesn't exist, cannot delete it." 00:13:32.360 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:32.360 rmmod nvme_tcp 00:13:32.360 rmmod nvme_fabrics 00:13:32.360 rmmod nvme_keyring 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 865480 ']' 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@490 -- # killprocess 865480 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 865480 ']' 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 865480 00:13:32.360 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:32.361 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:32.361 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 865480 00:13:32.619 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:32.619 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:32.619 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 865480' 00:13:32.619 killing process with pid 865480 00:13:32.619 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 865480 00:13:32.619 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 865480 00:13:32.878 19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:32.878 19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:32.878 19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:32.878 19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:32.878 19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:32.878 19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.878 
19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.878 19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.781 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:34.781 00:13:34.781 real 0m9.662s 00:13:34.781 user 0m22.823s 00:13:34.781 sys 0m2.746s 00:13:34.781 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:34.781 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:34.781 ************************************ 00:13:34.781 END TEST nvmf_invalid 00:13:34.781 ************************************ 00:13:34.781 19:04:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:34.781 19:04:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:34.781 19:04:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:34.782 19:04:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:34.782 ************************************ 00:13:34.782 START TEST nvmf_connect_stress 00:13:34.782 ************************************ 00:13:34.782 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:35.040 * Looking for test storage... 
00:13:35.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:35.040 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:35.041 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.573 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:37.574 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.574 19:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:37.574 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.574 19:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:37.574 Found net devices under 0000:09:00.0: cvl_0_0 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:37.574 Found net devices under 0000:09:00.1: cvl_0_1 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:37.574 
19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.574 
19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:37.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:13:37.574 00:13:37.574 --- 10.0.0.2 ping statistics --- 00:13:37.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.574 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:37.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:13:37.574 00:13:37.574 --- 10.0.0.1 ping statistics --- 00:13:37.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.574 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=868523 00:13:37.574 19:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 868523 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 868523 ']' 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.574 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.575 [2024-07-25 19:04:29.945318] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:37.575 [2024-07-25 19:04:29.945411] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.575 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.575 [2024-07-25 19:04:30.036225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:37.832 [2024-07-25 19:04:30.150026] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:37.832 [2024-07-25 19:04:30.150075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.832 [2024-07-25 19:04:30.150112] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.832 [2024-07-25 19:04:30.150126] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.832 [2024-07-25 19:04:30.150136] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.832 [2024-07-25 19:04:30.152125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.832 [2024-07-25 19:04:30.152255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.832 [2024-07-25 19:04:30.152259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.764 19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:38.764 19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:38.764 19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:38.764 19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:38.764 19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.764 19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.764 19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:38.764 19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.764 19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:13:38.764 [2024-07-25 19:04:30.990965] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.764 [2024-07-25 19:04:31.022254] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.764 NULL1 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=868678 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:38.764 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.765 19:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.765 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.022 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.022 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:39.022 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.022 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.022 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.280 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:39.280 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.280 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.280 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.846 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.846 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:39.846 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.846 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.846 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.104 19:04:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.104 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:40.104 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.104 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.104 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.362 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.362 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:40.362 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.362 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.362 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.620 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.620 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:40.620 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.620 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.620 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.877 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.877 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:40.877 
19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.877 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.877 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.440 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.440 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:41.440 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.440 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.440 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.697 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.697 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:41.697 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.697 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.697 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.987 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.987 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:41.987 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.987 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.987 
19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.270 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.270 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:42.270 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.270 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.270 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.528 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.528 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:42.528 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.528 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.528 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.094 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.094 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:43.094 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.094 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.094 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.351 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.352 
19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:43.352 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.352 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.352 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.609 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.609 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:43.609 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.609 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.609 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.867 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.867 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:43.867 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.867 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.867 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.125 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.125 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:44.125 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.125 
19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.125 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.690 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.690 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:44.690 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.690 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.690 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.948 19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.948 19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:44.948 19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.948 19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.948 19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.206 19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.206 19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:45.206 19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.206 19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.206 19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.464 
19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.464 19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:45.464 19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.464 19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.464 19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.722 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.722 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:45.722 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.722 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.722 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.287 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.287 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:46.287 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.288 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.288 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.545 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.545 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 
00:13:46.545 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.545 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.545 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.803 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.803 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:46.803 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.803 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.803 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.061 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.061 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:47.061 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.061 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.061 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.319 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.319 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:47.319 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.319 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:47.319 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.884 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.884 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:47.884 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.884 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.884 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.142 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.142 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:48.142 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.142 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.142 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.399 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.399 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:48.399 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.399 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.399 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.657 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:48.657 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:48.657 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.657 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.657 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.915 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:48.915 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.915 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 868678 00:13:48.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (868678) - No such process 00:13:48.915 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 868678 00:13:48.915 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:48.915 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:48.915 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:48.915 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:48.915 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:48.915 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:48.915 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:48.915 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:13:48.915 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:48.915 rmmod nvme_tcp 00:13:49.173 rmmod nvme_fabrics 00:13:49.173 rmmod nvme_keyring 00:13:49.173 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.173 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:49.173 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:49.173 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 868523 ']' 00:13:49.173 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 868523 00:13:49.173 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 868523 ']' 00:13:49.173 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 868523 00:13:49.173 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:49.173 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:49.173 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 868523 00:13:49.173 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:49.173 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:49.173 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 868523' 00:13:49.173 killing process with pid 868523 00:13:49.173 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- 
# kill 868523 00:13:49.173 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 868523 00:13:49.432 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:49.432 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:49.432 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:49.432 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:49.432 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:49.432 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.432 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.432 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.338 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:51.338 00:13:51.338 real 0m16.553s 00:13:51.338 user 0m40.789s 00:13:51.338 sys 0m6.401s 00:13:51.338 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:51.338 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.338 ************************************ 00:13:51.338 END TEST nvmf_connect_stress 00:13:51.338 ************************************ 00:13:51.338 19:04:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:51.338 19:04:43 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:51.338 19:04:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:51.338 19:04:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:51.597 ************************************ 00:13:51.597 START TEST nvmf_fused_ordering 00:13:51.597 ************************************ 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:51.597 * Looking for test storage... 00:13:51.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.597 19:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.597 19:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:51.597 19:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:51.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # 
local -a pci_net_devs 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.129 19:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:54.129 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:54.129 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:54.129 Found net devices under 0000:09:00.0: cvl_0_0 00:13:54.129 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:54.130 Found net devices under 0000:09:00.1: cvl_0_1 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.130 19:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:54.130 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:54.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:54.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:13:54.389 00:13:54.389 --- 10.0.0.2 ping statistics --- 00:13:54.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.389 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:54.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:13:54.389 00:13:54.389 --- 10.0.0.1 ping statistics --- 00:13:54.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.389 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=872239 00:13:54.389 19:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 872239 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 872239 ']' 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:54.389 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.389 [2024-07-25 19:04:46.733437] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:54.389 [2024-07-25 19:04:46.733531] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.389 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.389 [2024-07-25 19:04:46.810906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.648 [2024-07-25 19:04:46.922227] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:54.648 [2024-07-25 19:04:46.922283] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.648 [2024-07-25 19:04:46.922312] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.648 [2024-07-25 19:04:46.922324] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.648 [2024-07-25 19:04:46.922334] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.648 [2024-07-25 19:04:46.922360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.648 [2024-07-25 19:04:47.081398] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.648 [2024-07-25 19:04:47.097586] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.648 NULL1 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:54.648 19:04:47 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.648 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.906 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.906 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:54.906 [2024-07-25 19:04:47.143536] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:54.906 [2024-07-25 19:04:47.143583] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872258 ] 00:13:54.906 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.472 Attached to nqn.2016-06.io.spdk:cnode1 00:13:55.472 Namespace ID: 1 size: 1GB 00:13:55.472 fused_ordering(0) 00:13:55.472 fused_ordering(1) 00:13:55.472 fused_ordering(2) 00:13:55.472 fused_ordering(3) 00:13:55.472 fused_ordering(4) 00:13:55.472 fused_ordering(5) 00:13:55.472 fused_ordering(6) 00:13:55.472 fused_ordering(7) 00:13:55.472 fused_ordering(8) 00:13:55.472 fused_ordering(9) 00:13:55.472 fused_ordering(10) 00:13:55.472 fused_ordering(11) 00:13:55.472 fused_ordering(12) 00:13:55.472 fused_ordering(13) 00:13:55.472 fused_ordering(14) 00:13:55.472 fused_ordering(15) 00:13:55.472 fused_ordering(16) 00:13:55.472 fused_ordering(17) 00:13:55.472 fused_ordering(18) 00:13:55.472 fused_ordering(19) 00:13:55.472 fused_ordering(20) 00:13:55.472 fused_ordering(21) 00:13:55.472 fused_ordering(22) 00:13:55.472 fused_ordering(23) 00:13:55.472 fused_ordering(24) 00:13:55.473 fused_ordering(25) 00:13:55.473 fused_ordering(26) 00:13:55.473 fused_ordering(27) 00:13:55.473 fused_ordering(28) 00:13:55.473 fused_ordering(29) 00:13:55.473 fused_ordering(30) 00:13:55.473 fused_ordering(31) 00:13:55.473 fused_ordering(32) 00:13:55.473 fused_ordering(33) 00:13:55.473 fused_ordering(34) 00:13:55.473 fused_ordering(35) 00:13:55.473 fused_ordering(36) 00:13:55.473 fused_ordering(37) 00:13:55.473 fused_ordering(38) 00:13:55.473 fused_ordering(39) 00:13:55.473 fused_ordering(40) 00:13:55.473 fused_ordering(41) 00:13:55.473 fused_ordering(42) 00:13:55.473 fused_ordering(43) 00:13:55.473 fused_ordering(44) 00:13:55.473 fused_ordering(45) 00:13:55.473 fused_ordering(46) 00:13:55.473 fused_ordering(47) 00:13:55.473 
fused_ordering(48) 00:13:55.473 fused_ordering(49) 00:13:55.473 fused_ordering(50) 00:13:55.473 fused_ordering(51) 00:13:55.473 fused_ordering(52) 00:13:55.473 fused_ordering(53) 00:13:55.473 fused_ordering(54) 00:13:55.473 fused_ordering(55) 00:13:55.473 fused_ordering(56) 00:13:55.473 fused_ordering(57) 00:13:55.473 fused_ordering(58) 00:13:55.473 fused_ordering(59) 00:13:55.473 fused_ordering(60) 00:13:55.473 fused_ordering(61) 00:13:55.473 fused_ordering(62) 00:13:55.473 fused_ordering(63) 00:13:55.473 fused_ordering(64) 00:13:55.473 fused_ordering(65) 00:13:55.473 fused_ordering(66) 00:13:55.473 fused_ordering(67) 00:13:55.473 fused_ordering(68) 00:13:55.473 fused_ordering(69) 00:13:55.473 fused_ordering(70) 00:13:55.473 fused_ordering(71) 00:13:55.473 fused_ordering(72) 00:13:55.473 fused_ordering(73) 00:13:55.473 fused_ordering(74) 00:13:55.473 fused_ordering(75) 00:13:55.473 fused_ordering(76) 00:13:55.473 fused_ordering(77) 00:13:55.473 fused_ordering(78) 00:13:55.473 fused_ordering(79) 00:13:55.473 fused_ordering(80) 00:13:55.473 fused_ordering(81) 00:13:55.473 fused_ordering(82) 00:13:55.473 fused_ordering(83) 00:13:55.473 fused_ordering(84) 00:13:55.473 fused_ordering(85) 00:13:55.473 fused_ordering(86) 00:13:55.473 fused_ordering(87) 00:13:55.473 fused_ordering(88) 00:13:55.473 fused_ordering(89) 00:13:55.473 fused_ordering(90) 00:13:55.473 fused_ordering(91) 00:13:55.473 fused_ordering(92) 00:13:55.473 fused_ordering(93) 00:13:55.473 fused_ordering(94) 00:13:55.473 fused_ordering(95) 00:13:55.473 fused_ordering(96) 00:13:55.473 fused_ordering(97) 00:13:55.473 fused_ordering(98) 00:13:55.473 fused_ordering(99) 00:13:55.473 fused_ordering(100) 00:13:55.473 fused_ordering(101) 00:13:55.473 fused_ordering(102) 00:13:55.473 fused_ordering(103) 00:13:55.473 fused_ordering(104) 00:13:55.473 fused_ordering(105) 00:13:55.473 fused_ordering(106) 00:13:55.473 fused_ordering(107) 00:13:55.473 fused_ordering(108) 00:13:55.473 fused_ordering(109) 00:13:55.473 
[fused_ordering(110) through fused_ordering(1023): repetitive per-command trace output elided; timestamps ran from 00:13:55.473 to 00:13:58.107]
19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:58.107 rmmod nvme_tcp 00:13:58.107 rmmod nvme_fabrics 00:13:58.107 rmmod nvme_keyring 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 872239 ']' 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 872239 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 872239 ']' 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering --
common/autotest_common.sh@954 -- # kill -0 872239 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 872239 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 872239' 00:13:58.107 killing process with pid 872239 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 872239 00:13:58.107 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 872239 00:13:58.676 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:58.676 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:58.676 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:58.676 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.676 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:58.676 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.676 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
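The killprocess trace above follows a common teardown idiom: verify the PID is still alive, send SIGTERM, then wait to reap the child. The sketch below illustrates that pattern only; it is not SPDK's actual autotest_common.sh implementation, and the function body and messages are illustrative.

```shell
#!/usr/bin/env bash
# Hedged sketch of the kill-then-wait teardown pattern seen in the trace:
# check that the PID exists and is running, terminate it, then reap it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1               # no PID supplied
    kill -0 "$pid" 2>/dev/null || return 1  # process already gone
    echo "killing process with pid $pid"
    kill "$pid"                             # request termination (SIGTERM)
    wait "$pid" 2>/dev/null                 # reap; child exits with 128+15
    return 0                                # teardown itself succeeded
}

sleep 30 &           # stand-in for the target application under test
killprocess $!
```

Waiting after the kill matters: it reaps the child so the test script does not leave a zombie behind, mirroring the `kill 872239` / `wait 872239` pair in the log.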
00:13:58.676 19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.579 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:00.579 00:14:00.579 real 0m9.083s 00:14:00.579 user 0m6.250s 00:14:00.579 sys 0m4.470s 00:14:00.579 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:00.579 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:00.579 ************************************ 00:14:00.579 END TEST nvmf_fused_ordering 00:14:00.579 ************************************ 00:14:00.579 19:04:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:00.579 19:04:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:00.579 19:04:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:00.579 19:04:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:00.579 ************************************ 00:14:00.579 START TEST nvmf_ns_masking 00:14:00.580 ************************************ 00:14:00.580 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:00.580 * Looking for test storage... 
00:14:00.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.580 
19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=142468ad-8f64-47e6-be25-3bdc9a2fba57 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=60dbb6cd-9fa9-4e77-ba78-3f86e0566d92 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=e2fe333e-80d3-41cd-9e99-b58410c8ac6e 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.580 19:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.580 19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:03.147 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:03.147 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:03.147 Found net devices under 0000:09:00.0: cvl_0_0 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:03.147 Found net devices under 0000:09:00.1: cvl_0_1 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:03.147 19:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:03.147 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.148 19:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:03.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:14:03.148 00:14:03.148 --- 10.0.0.2 ping statistics --- 00:14:03.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.148 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:03.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:14:03.148 00:14:03.148 --- 10.0.0.1 ping statistics --- 00:14:03.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.148 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@51 -- # nvmfappstart 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=874891 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 874891 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 874891 ']' 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:03.148 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:03.406 [2024-07-25 19:04:55.651793] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:03.406 [2024-07-25 19:04:55.651886] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.406 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.406 [2024-07-25 19:04:55.734226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.406 [2024-07-25 19:04:55.855558] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.406 [2024-07-25 19:04:55.855614] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.406 [2024-07-25 19:04:55.855630] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.406 [2024-07-25 19:04:55.855643] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.406 [2024-07-25 19:04:55.855654] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:03.406 [2024-07-25 19:04:55.855682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.340 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:04.340 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:04.340 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:04.340 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:04.340 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:04.340 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.340 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:04.597 [2024-07-25 19:04:56.916925] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.597 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:04.597 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:04.597 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:04.855 Malloc1 00:14:04.855 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:05.113 Malloc2 00:14:05.113 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:05.370 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:05.628 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.885 [2024-07-25 19:04:58.233027] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.885 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:05.885 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e2fe333e-80d3-41cd-9e99-b58410c8ac6e -a 10.0.0.2 -s 4420 -i 4 00:14:06.142 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:06.142 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:06.142 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.142 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:06.142 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:08.039 [ 0]:0x1 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e56c18d105024570805f8c5faac51142 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e56c18d105024570805f8c5faac51142 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.039 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:08.297 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:08.297 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.297 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:08.297 [ 0]:0x1 00:14:08.297 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.297 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.555 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e56c18d105024570805f8c5faac51142 00:14:08.555 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e56c18d105024570805f8c5faac51142 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.555 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:08.555 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.555 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:08.555 [ 1]:0x2 00:14:08.555 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:08.555 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.555 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=beb0490f245848c9889fc7552d315ddd 00:14:08.555 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ beb0490f245848c9889fc7552d315ddd != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.555 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:08.555 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.813 19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.071 19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:09.329 19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:09.329 19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e2fe333e-80d3-41cd-9e99-b58410c8ac6e -a 10.0.0.2 -s 4420 -i 4 00:14:09.329 19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:09.329 19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:09.329 19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:09.329 19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:09.329 19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:09.329 19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:11.857 19:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:11.857 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:11.857 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.857 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:11.857 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.857 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:11.857 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:11.857 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:11.857 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:11.858 [ 0]:0x2 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.858 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.858 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=beb0490f245848c9889fc7552d315ddd 00:14:11.858 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ beb0490f245848c9889fc7552d315ddd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.858 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:11.858 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:11.858 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.858 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:11.858 [ 0]:0x1 00:14:11.858 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.858 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.116 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e56c18d105024570805f8c5faac51142 00:14:12.116 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e56c18d105024570805f8c5faac51142 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.116 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:12.116 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.116 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:14:12.116 [ 1]:0x2 00:14:12.116 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.116 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:12.116 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=beb0490f245848c9889fc7552d315ddd 00:14:12.116 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ beb0490f245848c9889fc7552d315ddd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.116 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:12.375 [ 0]:0x2 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=beb0490f245848c9889fc7552d315ddd 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ beb0490f245848c9889fc7552d315ddd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:12.375 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:12.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.633 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:12.891 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:12.891 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e2fe333e-80d3-41cd-9e99-b58410c8ac6e -a 10.0.0.2 -s 4420 -i 4 00:14:13.149 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:13.149 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:13.149 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:13.149 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:13.149 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:13.149 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.049 [ 0]:0x1 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e56c18d105024570805f8c5faac51142 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e56c18d105024570805f8c5faac51142 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:15.049 [ 1]:0x2 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:15.049 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.307 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=beb0490f245848c9889fc7552d315ddd 00:14:15.307 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ beb0490f245848c9889fc7552d315ddd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.307 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:15.307 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:15.307 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:15.307 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:15.307 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:15.307 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.307 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:14:15.307 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.307 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:15.307 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.308 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.308 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.308 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:15.565 [ 0]:0x2 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=beb0490f245848c9889fc7552d315ddd 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ beb0490f245848c9889fc7552d315ddd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:15.565 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:15.824 [2024-07-25 19:05:08.126775] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:15.824 request: 00:14:15.824 { 00:14:15.824 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.824 "nsid": 2, 00:14:15.824 "host": "nqn.2016-06.io.spdk:host1", 00:14:15.824 "method": "nvmf_ns_remove_host", 00:14:15.824 "req_id": 1 00:14:15.824 } 00:14:15.824 Got JSON-RPC error response 00:14:15.824 response: 00:14:15.824 { 00:14:15.824 "code": -32602, 00:14:15.824 "message": "Invalid parameters" 00:14:15.824 } 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:15.824 19:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:15.824 [ 0]:0x2 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=beb0490f245848c9889fc7552d315ddd 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ beb0490f245848c9889fc7552d315ddd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=876633 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 876633 /var/tmp/host.sock 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 876633 ']' 00:14:15.824 19:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:15.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.824 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:16.081 [2024-07-25 19:05:08.336322] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:16.081 [2024-07-25 19:05:08.336424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876633 ] 00:14:16.081 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.081 [2024-07-25 19:05:08.408313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.081 [2024-07-25 19:05:08.526860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.014 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.014 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:17.014 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.272 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:17.530 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 142468ad-8f64-47e6-be25-3bdc9a2fba57 00:14:17.530 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:17.530 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 142468AD8F6447E6BE253BDC9A2FBA57 -i 00:14:17.787 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 60dbb6cd-9fa9-4e77-ba78-3f86e0566d92 00:14:17.787 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:17.787 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 60DBB6CD9FA94E77BA783F86E0566D92 -i 00:14:18.045 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:18.302 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:18.560 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:18.560 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:18.818 nvme0n1 00:14:18.818 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:18.818 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:19.384 nvme1n2 00:14:19.384 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:19.384 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:19.384 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:19.384 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:19.384 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:19.662 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:19.662 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:19.662 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:19.662 19:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:19.920 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 142468ad-8f64-47e6-be25-3bdc9a2fba57 == \1\4\2\4\6\8\a\d\-\8\f\6\4\-\4\7\e\6\-\b\e\2\5\-\3\b\d\c\9\a\2\f\b\a\5\7 ]] 00:14:19.920 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:19.920 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:19.921 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:20.179 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 60dbb6cd-9fa9-4e77-ba78-3f86e0566d92 == \6\0\d\b\b\6\c\d\-\9\f\a\9\-\4\e\7\7\-\b\a\7\8\-\3\f\8\6\e\0\5\6\6\d\9\2 ]] 00:14:20.179 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 876633 00:14:20.179 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 876633 ']' 00:14:20.179 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 876633 00:14:20.179 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:20.179 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:20.179 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 876633 00:14:20.179 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:20.179 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:20.179 19:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 876633' 00:14:20.179 killing process with pid 876633 00:14:20.179 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 876633 00:14:20.179 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 876633 00:14:20.746 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.746 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:20.746 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:20.746 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:20.746 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:20.746 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.746 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:20.746 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.746 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.746 rmmod nvme_tcp 00:14:20.746 rmmod nvme_fabrics 00:14:21.005 rmmod nvme_keyring 00:14:21.005 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.005 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:21.005 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:21.005 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 874891 
']' 00:14:21.005 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 874891 00:14:21.005 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 874891 ']' 00:14:21.005 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 874891 00:14:21.005 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:21.005 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:21.005 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 874891 00:14:21.005 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:21.005 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:21.005 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 874891' 00:14:21.005 killing process with pid 874891 00:14:21.005 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 874891 00:14:21.005 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 874891 00:14:21.265 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:21.265 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:21.265 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:21.265 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.265 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:21.265 19:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.265 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.265 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.199 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:23.199 00:14:23.199 real 0m22.689s 00:14:23.199 user 0m29.563s 00:14:23.199 sys 0m4.598s 00:14:23.199 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:23.199 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:23.199 ************************************ 00:14:23.199 END TEST nvmf_ns_masking 00:14:23.199 ************************************ 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:23.459 ************************************ 00:14:23.459 START TEST nvmf_nvme_cli 00:14:23.459 ************************************ 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:23.459 * Looking for test storage... 
00:14:23.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.459 19:05:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:23.459 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:25.991 
19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.991 19:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:25.991 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:25.991 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:25.991 Found net devices under 0000:09:00.0: cvl_0_0 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:25.991 Found net devices under 0000:09:00.1: cvl_0_1 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:25.991 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:25.992 19:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.992 19:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:25.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:14:25.992 00:14:25.992 --- 10.0.0.2 ping statistics --- 00:14:25.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.992 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:25.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:14:25.992 00:14:25.992 --- 10.0.0.1 ping statistics --- 00:14:25.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.992 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=879428 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 879428 00:14:25.992 19:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 879428 ']' 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.992 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.992 [2024-07-25 19:05:18.392135] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:25.992 [2024-07-25 19:05:18.392223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.992 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.251 [2024-07-25 19:05:18.471628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.251 [2024-07-25 19:05:18.596920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.251 [2024-07-25 19:05:18.596968] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:26.251 [2024-07-25 19:05:18.596984] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.251 [2024-07-25 19:05:18.596997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.251 [2024-07-25 19:05:18.597008] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.251 [2024-07-25 19:05:18.597094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.251 [2024-07-25 19:05:18.597126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.251 [2024-07-25 19:05:18.597168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.251 [2024-07-25 19:05:18.597168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.185 [2024-07-25 19:05:19.354587] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.185 Malloc0 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.185 Malloc1 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.185 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.186 19:05:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.186 [2024-07-25 19:05:19.436163] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:14:27.186 00:14:27.186 Discovery Log Number of Records 2, Generation counter 2 00:14:27.186 =====Discovery Log Entry 0====== 00:14:27.186 trtype: tcp 00:14:27.186 adrfam: ipv4 00:14:27.186 subtype: current discovery subsystem 00:14:27.186 treq: not required 00:14:27.186 portid: 0 00:14:27.186 trsvcid: 4420 00:14:27.186 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:27.186 traddr: 10.0.0.2 00:14:27.186 eflags: explicit discovery connections, duplicate discovery information 00:14:27.186 sectype: none 00:14:27.186 =====Discovery Log Entry 1====== 00:14:27.186 trtype: tcp 00:14:27.186 adrfam: ipv4 00:14:27.186 subtype: nvme subsystem 00:14:27.186 treq: not required 00:14:27.186 portid: 0 00:14:27.186 trsvcid: 4420 00:14:27.186 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:27.186 traddr: 10.0.0.2 00:14:27.186 eflags: none 00:14:27.186 sectype: none 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:27.186 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:27.752 19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:27.752 19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:27.752 19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:27.752 19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:27.752 19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:27.752 19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:30.278 /dev/nvme0n1 ]] 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev 
_ 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:30.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.278 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.536 rmmod nvme_tcp 00:14:30.536 rmmod nvme_fabrics 00:14:30.536 rmmod 
nvme_keyring 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 879428 ']' 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 879428 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 879428 ']' 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 879428 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 879428 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 879428' 00:14:30.536 killing process with pid 879428 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 879428 00:14:30.536 19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 879428 00:14:30.793 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.793 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- 
# [[ tcp == \t\c\p ]] 00:14:30.793 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:30.793 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.793 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:30.793 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.793 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.793 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:33.328 00:14:33.328 real 0m9.545s 00:14:33.328 user 0m18.810s 00:14:33.328 sys 0m2.608s 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.328 ************************************ 00:14:33.328 END TEST nvmf_nvme_cli 00:14:33.328 ************************************ 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:33.328 ************************************ 00:14:33.328 START 
TEST nvmf_vfio_user 00:14:33.328 ************************************ 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:33.328 * Looking for test storage... 00:14:33.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.328 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:33.329 19:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=880474 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 880474' 00:14:33.329 Process pid: 880474 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 880474 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 880474 ']' 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:33.329 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:33.329 [2024-07-25 19:05:25.399642] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:33.329 [2024-07-25 19:05:25.399727] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.329 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.329 [2024-07-25 19:05:25.471930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:33.329 [2024-07-25 19:05:25.595235] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.329 [2024-07-25 19:05:25.595296] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.329 [2024-07-25 19:05:25.595313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.329 [2024-07-25 19:05:25.595327] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.329 [2024-07-25 19:05:25.595338] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:33.329 [2024-07-25 19:05:25.595405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.329 [2024-07-25 19:05:25.595457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.329 [2024-07-25 19:05:25.595506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.329 [2024-07-25 19:05:25.595509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.262 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:34.262 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:34.262 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:35.195 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:35.453 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:35.453 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:35.453 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:35.453 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:35.453 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:35.711 Malloc1 00:14:35.711 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:35.969 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:36.226 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:36.484 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:36.484 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:36.484 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:36.742 Malloc2 00:14:36.742 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:37.000 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:37.258 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:37.518 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:37.518 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:37.518 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:37.518 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:37.518 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:37.518 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:37.518 [2024-07-25 19:05:29.750866] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:37.518 [2024-07-25 19:05:29.750909] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881017 ] 00:14:37.518 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.518 [2024-07-25 19:05:29.785508] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:37.518 [2024-07-25 19:05:29.787963] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:37.518 [2024-07-25 19:05:29.788005] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc451726000 00:14:37.518 [2024-07-25 19:05:29.788957] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.518 [2024-07-25 19:05:29.789957] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.518 [2024-07-25 
19:05:29.790962] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.518 [2024-07-25 19:05:29.791968] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:37.518 [2024-07-25 19:05:29.792968] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:37.518 [2024-07-25 19:05:29.793973] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.518 [2024-07-25 19:05:29.794981] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:37.518 [2024-07-25 19:05:29.795984] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.518 [2024-07-25 19:05:29.796992] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:37.518 [2024-07-25 19:05:29.797011] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc45171b000 00:14:37.518 [2024-07-25 19:05:29.798147] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:37.518 [2024-07-25 19:05:29.812695] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:37.518 [2024-07-25 19:05:29.812726] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:37.518 [2024-07-25 19:05:29.818137] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x0, value 0x201e0100ff 00:14:37.518 [2024-07-25 19:05:29.818193] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:37.518 [2024-07-25 19:05:29.818287] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:37.518 [2024-07-25 19:05:29.818313] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:37.518 [2024-07-25 19:05:29.818324] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:37.518 [2024-07-25 19:05:29.819130] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:37.518 [2024-07-25 19:05:29.819154] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:37.518 [2024-07-25 19:05:29.819168] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:37.518 [2024-07-25 19:05:29.820129] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:37.518 [2024-07-25 19:05:29.820147] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:37.518 [2024-07-25 19:05:29.820160] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:37.518 [2024-07-25 19:05:29.821141] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:37.518 [2024-07-25 19:05:29.821160] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:37.518 [2024-07-25 19:05:29.822132] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:37.518 [2024-07-25 19:05:29.822150] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:37.518 [2024-07-25 19:05:29.822158] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:37.518 [2024-07-25 19:05:29.822169] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:37.518 [2024-07-25 19:05:29.822279] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:37.518 [2024-07-25 19:05:29.822287] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:37.518 [2024-07-25 19:05:29.822296] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:37.518 [2024-07-25 19:05:29.824113] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:37.518 [2024-07-25 19:05:29.825154] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:37.518 [2024-07-25 19:05:29.826164] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:37.518 
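The register reads logged above show the controller reporting CAP (offset 0x0) = 0x201e0100ff and VS (offset 0x8) = 0x10300. As an illustrative aside (not part of the test itself), these values can be decoded by hand using the NVMe base specification's field layouts, and the results line up with the Identify summary printed further down:

```python
# Illustrative decode of the register values logged above:
# CAP (offset 0x0) = 0x201e0100ff, VS (offset 0x8) = 0x10300.
# Field layouts follow the NVMe base specification; this is a sketch,
# not SPDK code.

def decode_cap(cap: int) -> dict:
    """Extract a few fields from the 64-bit Controller Capabilities register."""
    return {
        "max_queue_entries": (cap & 0xFFFF) + 1,             # MQES is 0-based
        "contiguous_queues_required": bool((cap >> 16) & 1),  # CQR
        "timeout_ms": ((cap >> 24) & 0xFF) * 500,            # TO, 500 ms units
        "doorbell_stride_bytes": 4 << ((cap >> 32) & 0xF),   # DSTRD
        "mem_page_size_min": 4096 << ((cap >> 48) & 0xF),    # MPSMIN
    }

def decode_vs(vs: int) -> str:
    """Format the Version register as major.minor.tertiary."""
    return f"{vs >> 16}.{(vs >> 8) & 0xFF}.{vs & 0xFF}"

cap = decode_cap(0x201E0100FF)
print(cap["max_queue_entries"])   # 256, matches "Maximum Queue Entries: 256"
print(cap["timeout_ms"])          # 15000, matches "Reset Timeout: 15000 ms"
print(decode_vs(0x10300))         # 1.3.0, matches "NVMe Specification Version (VS): 1.3"
```
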
[2024-07-25 19:05:29.827159] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:37.518 [2024-07-25 19:05:29.827288] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:37.518 [2024-07-25 19:05:29.828174] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:37.518 [2024-07-25 19:05:29.828193] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:37.518 [2024-07-25 19:05:29.828202] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:37.518 [2024-07-25 19:05:29.828226] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:37.518 [2024-07-25 19:05:29.828240] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:37.518 [2024-07-25 19:05:29.828265] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:37.518 [2024-07-25 19:05:29.828274] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.518 [2024-07-25 19:05:29.828281] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.518 [2024-07-25 19:05:29.828299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:37.518 [2024-07-25 19:05:29.828361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:37.518 [2024-07-25 19:05:29.828377] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:37.518 [2024-07-25 19:05:29.828400] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:37.519 [2024-07-25 19:05:29.828407] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:37.519 [2024-07-25 19:05:29.828415] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:37.519 [2024-07-25 19:05:29.828423] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:37.519 [2024-07-25 19:05:29.828430] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:37.519 [2024-07-25 19:05:29.828438] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.828465] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.828484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:37.519 [2024-07-25 19:05:29.828504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:37.519 [2024-07-25 19:05:29.828524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.519 [2024-07-25 19:05:29.828543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.519 [2024-07-25 19:05:29.828555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.519 [2024-07-25 19:05:29.828567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.519 [2024-07-25 19:05:29.828575] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.828590] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.828604] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:37.519 [2024-07-25 19:05:29.828616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:37.519 [2024-07-25 19:05:29.828626] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:37.519 [2024-07-25 19:05:29.828633] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.828647] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.828658] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.828670] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:37.519 [2024-07-25 19:05:29.828682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:37.519 [2024-07-25 19:05:29.828745] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.828760] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.828773] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:37.519 [2024-07-25 19:05:29.828780] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:37.519 [2024-07-25 19:05:29.828786] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.519 [2024-07-25 19:05:29.828795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:37.519 [2024-07-25 19:05:29.828811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:37.519 [2024-07-25 19:05:29.828832] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:37.519 [2024-07-25 19:05:29.828847] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.828860] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:37.519 [2024-07-25 
19:05:29.828872] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:37.519 [2024-07-25 19:05:29.828879] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.519 [2024-07-25 19:05:29.828889] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.519 [2024-07-25 19:05:29.828898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:37.519 [2024-07-25 19:05:29.828925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:37.519 [2024-07-25 19:05:29.828945] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.828959] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.828971] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:37.519 [2024-07-25 19:05:29.828978] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.519 [2024-07-25 19:05:29.828984] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.519 [2024-07-25 19:05:29.828993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:37.519 [2024-07-25 19:05:29.829006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:37.519 [2024-07-25 19:05:29.829019] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.829030] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.829043] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.829056] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.829064] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.829072] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.829080] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:37.519 [2024-07-25 19:05:29.829110] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:37.519 [2024-07-25 19:05:29.829120] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:37.519 [2024-07-25 19:05:29.829148] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:37.519 [2024-07-25 19:05:29.829166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 
00:14:37.519 [2024-07-25 19:05:29.829186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:37.519 [2024-07-25 19:05:29.829199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:37.519 [2024-07-25 19:05:29.829215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:37.519 [2024-07-25 19:05:29.829227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:37.519 [2024-07-25 19:05:29.829244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:37.519 [2024-07-25 19:05:29.829260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:37.519 [2024-07-25 19:05:29.829284] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:37.519 [2024-07-25 19:05:29.829294] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:37.519 [2024-07-25 19:05:29.829301] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:37.519 [2024-07-25 19:05:29.829307] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:37.519 [2024-07-25 19:05:29.829313] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:37.519 [2024-07-25 19:05:29.829322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:37.519 [2024-07-25 19:05:29.829335] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fc000 len:512 00:14:37.519 [2024-07-25 19:05:29.829343] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:37.520 [2024-07-25 19:05:29.829349] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.520 [2024-07-25 19:05:29.829358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:37.520 [2024-07-25 19:05:29.829369] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:37.520 [2024-07-25 19:05:29.829377] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.520 [2024-07-25 19:05:29.829398] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.520 [2024-07-25 19:05:29.829407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:37.520 [2024-07-25 19:05:29.829419] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:37.520 [2024-07-25 19:05:29.829427] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:37.520 [2024-07-25 19:05:29.829433] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.520 [2024-07-25 19:05:29.829441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:37.520 [2024-07-25 19:05:29.829468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:37.520 [2024-07-25 19:05:29.829487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:37.520 [2024-07-25 19:05:29.829506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:37.520 [2024-07-25 19:05:29.829517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:37.520 ===================================================== 00:14:37.520 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:37.520 ===================================================== 00:14:37.520 Controller Capabilities/Features 00:14:37.520 ================================ 00:14:37.520 Vendor ID: 4e58 00:14:37.520 Subsystem Vendor ID: 4e58 00:14:37.520 Serial Number: SPDK1 00:14:37.520 Model Number: SPDK bdev Controller 00:14:37.520 Firmware Version: 24.09 00:14:37.520 Recommended Arb Burst: 6 00:14:37.520 IEEE OUI Identifier: 8d 6b 50 00:14:37.520 Multi-path I/O 00:14:37.520 May have multiple subsystem ports: Yes 00:14:37.520 May have multiple controllers: Yes 00:14:37.520 Associated with SR-IOV VF: No 00:14:37.520 Max Data Transfer Size: 131072 00:14:37.520 Max Number of Namespaces: 32 00:14:37.520 Max Number of I/O Queues: 127 00:14:37.520 NVMe Specification Version (VS): 1.3 00:14:37.520 NVMe Specification Version (Identify): 1.3 00:14:37.520 Maximum Queue Entries: 256 00:14:37.520 Contiguous Queues Required: Yes 00:14:37.520 Arbitration Mechanisms Supported 00:14:37.520 Weighted Round Robin: Not Supported 00:14:37.520 Vendor Specific: Not Supported 00:14:37.520 Reset Timeout: 15000 ms 00:14:37.520 Doorbell Stride: 4 bytes 00:14:37.520 NVM Subsystem Reset: Not Supported 00:14:37.520 Command Sets Supported 00:14:37.520 NVM Command Set: Supported 00:14:37.520 Boot Partition: Not Supported 00:14:37.520 Memory Page Size Minimum: 4096 bytes 00:14:37.520 Memory Page Size Maximum: 4096 bytes 00:14:37.520 Persistent Memory Region: Not 
Supported 00:14:37.520 Optional Asynchronous Events Supported 00:14:37.520 Namespace Attribute Notices: Supported 00:14:37.520 Firmware Activation Notices: Not Supported 00:14:37.520 ANA Change Notices: Not Supported 00:14:37.520 PLE Aggregate Log Change Notices: Not Supported 00:14:37.520 LBA Status Info Alert Notices: Not Supported 00:14:37.520 EGE Aggregate Log Change Notices: Not Supported 00:14:37.520 Normal NVM Subsystem Shutdown event: Not Supported 00:14:37.520 Zone Descriptor Change Notices: Not Supported 00:14:37.520 Discovery Log Change Notices: Not Supported 00:14:37.520 Controller Attributes 00:14:37.520 128-bit Host Identifier: Supported 00:14:37.520 Non-Operational Permissive Mode: Not Supported 00:14:37.520 NVM Sets: Not Supported 00:14:37.520 Read Recovery Levels: Not Supported 00:14:37.520 Endurance Groups: Not Supported 00:14:37.520 Predictable Latency Mode: Not Supported 00:14:37.520 Traffic Based Keep ALive: Not Supported 00:14:37.520 Namespace Granularity: Not Supported 00:14:37.520 SQ Associations: Not Supported 00:14:37.520 UUID List: Not Supported 00:14:37.520 Multi-Domain Subsystem: Not Supported 00:14:37.520 Fixed Capacity Management: Not Supported 00:14:37.520 Variable Capacity Management: Not Supported 00:14:37.520 Delete Endurance Group: Not Supported 00:14:37.520 Delete NVM Set: Not Supported 00:14:37.520 Extended LBA Formats Supported: Not Supported 00:14:37.520 Flexible Data Placement Supported: Not Supported 00:14:37.520 00:14:37.520 Controller Memory Buffer Support 00:14:37.520 ================================ 00:14:37.520 Supported: No 00:14:37.520 00:14:37.520 Persistent Memory Region Support 00:14:37.520 ================================ 00:14:37.520 Supported: No 00:14:37.520 00:14:37.520 Admin Command Set Attributes 00:14:37.520 ============================ 00:14:37.520 Security Send/Receive: Not Supported 00:14:37.520 Format NVM: Not Supported 00:14:37.520 Firmware Activate/Download: Not Supported 00:14:37.520 Namespace 
Management: Not Supported 00:14:37.520 Device Self-Test: Not Supported 00:14:37.520 Directives: Not Supported 00:14:37.520 NVMe-MI: Not Supported 00:14:37.520 Virtualization Management: Not Supported 00:14:37.520 Doorbell Buffer Config: Not Supported 00:14:37.520 Get LBA Status Capability: Not Supported 00:14:37.520 Command & Feature Lockdown Capability: Not Supported 00:14:37.520 Abort Command Limit: 4 00:14:37.520 Async Event Request Limit: 4 00:14:37.520 Number of Firmware Slots: N/A 00:14:37.520 Firmware Slot 1 Read-Only: N/A 00:14:37.520 Firmware Activation Without Reset: N/A 00:14:37.520 Multiple Update Detection Support: N/A 00:14:37.520 Firmware Update Granularity: No Information Provided 00:14:37.520 Per-Namespace SMART Log: No 00:14:37.520 Asymmetric Namespace Access Log Page: Not Supported 00:14:37.520 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:37.520 Command Effects Log Page: Supported 00:14:37.520 Get Log Page Extended Data: Supported 00:14:37.520 Telemetry Log Pages: Not Supported 00:14:37.520 Persistent Event Log Pages: Not Supported 00:14:37.520 Supported Log Pages Log Page: May Support 00:14:37.520 Commands Supported & Effects Log Page: Not Supported 00:14:37.520 Feature Identifiers & Effects Log Page:May Support 00:14:37.520 NVMe-MI Commands & Effects Log Page: May Support 00:14:37.520 Data Area 4 for Telemetry Log: Not Supported 00:14:37.520 Error Log Page Entries Supported: 128 00:14:37.520 Keep Alive: Supported 00:14:37.520 Keep Alive Granularity: 10000 ms 00:14:37.520 00:14:37.520 NVM Command Set Attributes 00:14:37.520 ========================== 00:14:37.520 Submission Queue Entry Size 00:14:37.520 Max: 64 00:14:37.520 Min: 64 00:14:37.520 Completion Queue Entry Size 00:14:37.520 Max: 16 00:14:37.520 Min: 16 00:14:37.520 Number of Namespaces: 32 00:14:37.520 Compare Command: Supported 00:14:37.520 Write Uncorrectable Command: Not Supported 00:14:37.520 Dataset Management Command: Supported 00:14:37.520 Write Zeroes Command: Supported 
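The SET FEATURES NUMBER OF QUEUES completion logged earlier carries cdw0:7e007e. As an illustrative aside, the NVMe spec defines that completion dword as two 0-based 16-bit counts (queues allocated), which is where the "Max Number of I/O Queues: 127" figure above comes from:

```python
# Illustrative decode of the Number of Queues completion (cdw0:7e007e)
# seen in the log. Per the NVMe base specification, the low 16 bits are
# submission queues allocated and the high 16 bits completion queues
# allocated, both 0-based. A sketch, not SPDK code.

def decode_num_queues(cdw0: int) -> tuple:
    nsqa = (cdw0 & 0xFFFF) + 1          # submission queues allocated
    ncqa = ((cdw0 >> 16) & 0xFFFF) + 1  # completion queues allocated
    return (nsqa, ncqa)

print(decode_num_queues(0x7E007E))  # (127, 127), matches "Max Number of I/O Queues: 127"
```
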
00:14:37.520 Set Features Save Field: Not Supported 00:14:37.520 Reservations: Not Supported 00:14:37.520 Timestamp: Not Supported 00:14:37.520 Copy: Supported 00:14:37.520 Volatile Write Cache: Present 00:14:37.520 Atomic Write Unit (Normal): 1 00:14:37.520 Atomic Write Unit (PFail): 1 00:14:37.520 Atomic Compare & Write Unit: 1 00:14:37.520 Fused Compare & Write: Supported 00:14:37.520 Scatter-Gather List 00:14:37.520 SGL Command Set: Supported (Dword aligned) 00:14:37.520 SGL Keyed: Not Supported 00:14:37.520 SGL Bit Bucket Descriptor: Not Supported 00:14:37.520 SGL Metadata Pointer: Not Supported 00:14:37.520 Oversized SGL: Not Supported 00:14:37.520 SGL Metadata Address: Not Supported 00:14:37.520 SGL Offset: Not Supported 00:14:37.520 Transport SGL Data Block: Not Supported 00:14:37.520 Replay Protected Memory Block: Not Supported 00:14:37.520 00:14:37.520 Firmware Slot Information 00:14:37.520 ========================= 00:14:37.520 Active slot: 1 00:14:37.520 Slot 1 Firmware Revision: 24.09 00:14:37.520 00:14:37.520 00:14:37.520 Commands Supported and Effects 00:14:37.520 ============================== 00:14:37.520 Admin Commands 00:14:37.520 -------------- 00:14:37.520 Get Log Page (02h): Supported 00:14:37.520 Identify (06h): Supported 00:14:37.520 Abort (08h): Supported 00:14:37.520 Set Features (09h): Supported 00:14:37.520 Get Features (0Ah): Supported 00:14:37.521 Asynchronous Event Request (0Ch): Supported 00:14:37.521 Keep Alive (18h): Supported 00:14:37.521 I/O Commands 00:14:37.521 ------------ 00:14:37.521 Flush (00h): Supported LBA-Change 00:14:37.521 Write (01h): Supported LBA-Change 00:14:37.521 Read (02h): Supported 00:14:37.521 Compare (05h): Supported 00:14:37.521 Write Zeroes (08h): Supported LBA-Change 00:14:37.521 Dataset Management (09h): Supported LBA-Change 00:14:37.521 Copy (19h): Supported LBA-Change 00:14:37.521 00:14:37.521 Error Log 00:14:37.521 ========= 00:14:37.521 00:14:37.521 Arbitration 00:14:37.521 =========== 00:14:37.521 
Arbitration Burst: 1 00:14:37.521 00:14:37.521 Power Management 00:14:37.521 ================ 00:14:37.521 Number of Power States: 1 00:14:37.521 Current Power State: Power State #0 00:14:37.521 Power State #0: 00:14:37.521 Max Power: 0.00 W 00:14:37.521 Non-Operational State: Operational 00:14:37.521 Entry Latency: Not Reported 00:14:37.521 Exit Latency: Not Reported 00:14:37.521 Relative Read Throughput: 0 00:14:37.521 Relative Read Latency: 0 00:14:37.521 Relative Write Throughput: 0 00:14:37.521 Relative Write Latency: 0 00:14:37.521 Idle Power: Not Reported 00:14:37.521 Active Power: Not Reported 00:14:37.521 Non-Operational Permissive Mode: Not Supported 00:14:37.521 00:14:37.521 Health Information 00:14:37.521 ================== 00:14:37.521 Critical Warnings: 00:14:37.521 Available Spare Space: OK 00:14:37.521 Temperature: OK 00:14:37.521 Device Reliability: OK 00:14:37.521 Read Only: No 00:14:37.521 Volatile Memory Backup: OK 00:14:37.521 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:37.521 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:37.521 Available Spare: 0% 00:14:37.521 Available Sp[2024-07-25 19:05:29.829638] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:37.521 [2024-07-25 19:05:29.829654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:37.521 [2024-07-25 19:05:29.829696] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:37.521 [2024-07-25 19:05:29.829714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.521 [2024-07-25 19:05:29.829724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.521 [2024-07-25 19:05:29.829737] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.521 [2024-07-25 19:05:29.829747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.521 [2024-07-25 19:05:29.832115] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:37.521 [2024-07-25 19:05:29.832137] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:37.521 [2024-07-25 19:05:29.832202] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:37.521 [2024-07-25 19:05:29.832274] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:37.521 [2024-07-25 19:05:29.832287] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:37.521 [2024-07-25 19:05:29.833209] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:37.521 [2024-07-25 19:05:29.833232] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:37.521 [2024-07-25 19:05:29.833287] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:37.521 [2024-07-25 19:05:29.837112] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:37.521 are Threshold: 0% 00:14:37.521 Life Percentage Used: 0% 00:14:37.521 Data Units Read: 0 00:14:37.521 Data Units Written: 0 00:14:37.521 Host Read Commands: 0 00:14:37.521 Host Write Commands: 
0 00:14:37.521 Controller Busy Time: 0 minutes 00:14:37.521 Power Cycles: 0 00:14:37.521 Power On Hours: 0 hours 00:14:37.521 Unsafe Shutdowns: 0 00:14:37.521 Unrecoverable Media Errors: 0 00:14:37.521 Lifetime Error Log Entries: 0 00:14:37.521 Warning Temperature Time: 0 minutes 00:14:37.521 Critical Temperature Time: 0 minutes 00:14:37.521 00:14:37.521 Number of Queues 00:14:37.521 ================ 00:14:37.521 Number of I/O Submission Queues: 127 00:14:37.521 Number of I/O Completion Queues: 127 00:14:37.521 00:14:37.521 Active Namespaces 00:14:37.521 ================= 00:14:37.521 Namespace ID:1 00:14:37.521 Error Recovery Timeout: Unlimited 00:14:37.521 Command Set Identifier: NVM (00h) 00:14:37.521 Deallocate: Supported 00:14:37.521 Deallocated/Unwritten Error: Not Supported 00:14:37.521 Deallocated Read Value: Unknown 00:14:37.521 Deallocate in Write Zeroes: Not Supported 00:14:37.521 Deallocated Guard Field: 0xFFFF 00:14:37.521 Flush: Supported 00:14:37.521 Reservation: Supported 00:14:37.521 Namespace Sharing Capabilities: Multiple Controllers 00:14:37.521 Size (in LBAs): 131072 (0GiB) 00:14:37.521 Capacity (in LBAs): 131072 (0GiB) 00:14:37.521 Utilization (in LBAs): 131072 (0GiB) 00:14:37.521 NGUID: AE6DA195A80B43CF99D62A6CC18A94B8 00:14:37.521 UUID: ae6da195-a80b-43cf-99d6-2a6cc18a94b8 00:14:37.521 Thin Provisioning: Not Supported 00:14:37.521 Per-NS Atomic Units: Yes 00:14:37.521 Atomic Boundary Size (Normal): 0 00:14:37.521 Atomic Boundary Size (PFail): 0 00:14:37.521 Atomic Boundary Offset: 0 00:14:37.521 Maximum Single Source Range Length: 65535 00:14:37.521 Maximum Copy Length: 65535 00:14:37.521 Maximum Source Range Count: 1 00:14:37.521 NGUID/EUI64 Never Reused: No 00:14:37.521 Namespace Write Protected: No 00:14:37.521 Number of LBA Formats: 1 00:14:37.521 Current LBA Format: LBA Format #00 00:14:37.521 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:37.521 00:14:37.521 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:37.521 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.780 [2024-07-25 19:05:30.077008] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:43.042 Initializing NVMe Controllers 00:14:43.042 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:43.042 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:43.042 Initialization complete. Launching workers. 00:14:43.042 ======================================================== 00:14:43.042 Latency(us) 00:14:43.042 Device Information : IOPS MiB/s Average min max 00:14:43.042 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33699.59 131.64 3800.03 1162.09 8560.60 00:14:43.042 ======================================================== 00:14:43.042 Total : 33699.59 131.64 3800.03 1162.09 8560.60 00:14:43.042 00:14:43.042 [2024-07-25 19:05:35.100304] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:43.042 19:05:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:43.042 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.042 [2024-07-25 19:05:35.345540] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:48.302 Initializing NVMe Controllers 00:14:48.302 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:48.302 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:48.302 Initialization complete. Launching workers. 00:14:48.302 ======================================================== 00:14:48.302 Latency(us) 00:14:48.302 Device Information : IOPS MiB/s Average min max 00:14:48.302 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15934.37 62.24 8041.32 4987.35 15976.12 00:14:48.302 ======================================================== 00:14:48.302 Total : 15934.37 62.24 8041.32 4987.35 15976.12 00:14:48.302 00:14:48.302 [2024-07-25 19:05:40.381742] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:48.302 19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:48.302 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.302 [2024-07-25 19:05:40.603776] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:53.607 [2024-07-25 19:05:45.679492] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:53.607 Initializing NVMe Controllers 00:14:53.607 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:53.607 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:53.607 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:53.607 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:53.607 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:53.607 Initialization complete. Launching workers. 00:14:53.607 Starting thread on core 2 00:14:53.607 Starting thread on core 3 00:14:53.607 Starting thread on core 1 00:14:53.607 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:53.607 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.607 [2024-07-25 19:05:46.009558] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.893 [2024-07-25 19:05:49.076592] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:56.893 Initializing NVMe Controllers 00:14:56.893 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:56.893 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:56.893 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:56.893 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:56.893 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:56.893 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:56.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:56.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:56.893 Initialization complete. Launching workers. 
00:14:56.893 Starting thread on core 1 with urgent priority queue 00:14:56.893 Starting thread on core 2 with urgent priority queue 00:14:56.893 Starting thread on core 3 with urgent priority queue 00:14:56.893 Starting thread on core 0 with urgent priority queue 00:14:56.893 SPDK bdev Controller (SPDK1 ) core 0: 4952.33 IO/s 20.19 secs/100000 ios 00:14:56.893 SPDK bdev Controller (SPDK1 ) core 1: 4959.33 IO/s 20.16 secs/100000 ios 00:14:56.893 SPDK bdev Controller (SPDK1 ) core 2: 5197.33 IO/s 19.24 secs/100000 ios 00:14:56.893 SPDK bdev Controller (SPDK1 ) core 3: 5659.33 IO/s 17.67 secs/100000 ios 00:14:56.893 ======================================================== 00:14:56.893 00:14:56.893 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:56.893 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.151 [2024-07-25 19:05:49.395592] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.151 Initializing NVMe Controllers 00:14:57.151 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.151 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.151 Namespace ID: 1 size: 0GB 00:14:57.151 Initialization complete. 00:14:57.151 INFO: using host memory buffer for IO 00:14:57.151 Hello world! 
00:14:57.151 [2024-07-25 19:05:49.429131] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.151 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:57.151 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.409 [2024-07-25 19:05:49.733541] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:58.343 Initializing NVMe Controllers 00:14:58.343 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:58.343 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:58.343 Initialization complete. Launching workers. 00:14:58.343 submit (in ns) avg, min, max = 7324.4, 3552.2, 4001582.2 00:14:58.343 complete (in ns) avg, min, max = 28410.8, 2070.0, 4014733.3 00:14:58.343 00:14:58.343 Submit histogram 00:14:58.343 ================ 00:14:58.343 Range in us Cumulative Count 00:14:58.343 3.532 - 3.556: 0.0076% ( 1) 00:14:58.343 3.556 - 3.579: 0.5177% ( 67) 00:14:58.343 3.579 - 3.603: 4.3170% ( 499) 00:14:58.343 3.603 - 3.627: 10.7964% ( 851) 00:14:58.343 3.627 - 3.650: 20.8238% ( 1317) 00:14:58.343 3.650 - 3.674: 31.1025% ( 1350) 00:14:58.343 3.674 - 3.698: 39.5462% ( 1109) 00:14:58.343 3.698 - 3.721: 47.2286% ( 1009) 00:14:58.343 3.721 - 3.745: 52.5659% ( 701) 00:14:58.343 3.745 - 3.769: 57.6519% ( 668) 00:14:58.343 3.769 - 3.793: 61.7025% ( 532) 00:14:58.343 3.793 - 3.816: 65.1744% ( 456) 00:14:58.343 3.816 - 3.840: 68.0143% ( 373) 00:14:58.343 3.840 - 3.864: 71.9050% ( 511) 00:14:58.343 3.864 - 3.887: 76.2449% ( 570) 00:14:58.343 3.887 - 3.911: 80.5771% ( 569) 00:14:58.343 3.911 - 3.935: 84.0490% ( 456) 00:14:58.343 3.935 - 3.959: 86.3560% ( 303) 00:14:58.343 3.959 - 3.982: 88.3661% ( 264) 00:14:58.343 3.982 - 
4.006: 90.2010% ( 241) 00:14:58.343 4.006 - 4.030: 91.4344% ( 162) 00:14:58.343 4.030 - 4.053: 92.5384% ( 145) 00:14:58.343 4.053 - 4.077: 93.4978% ( 126) 00:14:58.343 4.077 - 4.101: 94.2439% ( 98) 00:14:58.343 4.101 - 4.124: 95.0510% ( 106) 00:14:58.343 4.124 - 4.148: 95.5307% ( 63) 00:14:58.343 4.148 - 4.172: 95.8809% ( 46) 00:14:58.343 4.172 - 4.196: 96.1017% ( 29) 00:14:58.343 4.196 - 4.219: 96.3606% ( 34) 00:14:58.343 4.219 - 4.243: 96.4748% ( 15) 00:14:58.343 4.243 - 4.267: 96.5890% ( 15) 00:14:58.343 4.267 - 4.290: 96.7032% ( 15) 00:14:58.343 4.290 - 4.314: 96.8403% ( 18) 00:14:58.343 4.314 - 4.338: 96.8859% ( 6) 00:14:58.343 4.338 - 4.361: 96.9545% ( 9) 00:14:58.343 4.361 - 4.385: 97.0078% ( 7) 00:14:58.343 4.385 - 4.409: 97.0382% ( 4) 00:14:58.343 4.409 - 4.433: 97.0534% ( 2) 00:14:58.343 4.433 - 4.456: 97.0763% ( 3) 00:14:58.343 4.456 - 4.480: 97.0839% ( 1) 00:14:58.343 4.480 - 4.504: 97.1067% ( 3) 00:14:58.343 4.504 - 4.527: 97.1296% ( 3) 00:14:58.343 4.551 - 4.575: 97.1677% ( 5) 00:14:58.343 4.575 - 4.599: 97.1829% ( 2) 00:14:58.343 4.622 - 4.646: 97.2210% ( 5) 00:14:58.343 4.646 - 4.670: 97.2362% ( 2) 00:14:58.343 4.670 - 4.693: 97.2743% ( 5) 00:14:58.343 4.693 - 4.717: 97.3199% ( 6) 00:14:58.343 4.717 - 4.741: 97.3504% ( 4) 00:14:58.343 4.741 - 4.764: 97.4113% ( 8) 00:14:58.343 4.764 - 4.788: 97.4265% ( 2) 00:14:58.343 4.788 - 4.812: 97.4798% ( 7) 00:14:58.343 4.812 - 4.836: 97.5027% ( 3) 00:14:58.343 4.836 - 4.859: 97.5636% ( 8) 00:14:58.343 4.859 - 4.883: 97.6093% ( 6) 00:14:58.343 4.883 - 4.907: 97.6626% ( 7) 00:14:58.343 4.907 - 4.930: 97.7311% ( 9) 00:14:58.343 4.930 - 4.954: 97.7539% ( 3) 00:14:58.343 4.954 - 4.978: 97.7691% ( 2) 00:14:58.343 4.978 - 5.001: 97.7920% ( 3) 00:14:58.343 5.001 - 5.025: 97.8148% ( 3) 00:14:58.343 5.025 - 5.049: 97.8377% ( 3) 00:14:58.343 5.049 - 5.073: 97.8681% ( 4) 00:14:58.343 5.073 - 5.096: 97.8986% ( 4) 00:14:58.343 5.096 - 5.120: 97.9443% ( 6) 00:14:58.343 5.120 - 5.144: 97.9595% ( 2) 00:14:58.343 5.144 - 5.167: 
97.9747% ( 2) 00:14:58.343 5.191 - 5.215: 97.9823% ( 1) 00:14:58.343 5.310 - 5.333: 97.9976% ( 2) 00:14:58.343 5.357 - 5.381: 98.0052% ( 1) 00:14:58.343 5.404 - 5.428: 98.0128% ( 1) 00:14:58.343 5.428 - 5.452: 98.0204% ( 1) 00:14:58.343 5.476 - 5.499: 98.0280% ( 1) 00:14:58.343 5.499 - 5.523: 98.0356% ( 1) 00:14:58.343 5.594 - 5.618: 98.0432% ( 1) 00:14:58.343 5.641 - 5.665: 98.0585% ( 2) 00:14:58.343 5.665 - 5.689: 98.0661% ( 1) 00:14:58.343 5.713 - 5.736: 98.0737% ( 1) 00:14:58.343 5.736 - 5.760: 98.0813% ( 1) 00:14:58.343 5.950 - 5.973: 98.0889% ( 1) 00:14:58.343 6.021 - 6.044: 98.0965% ( 1) 00:14:58.343 6.258 - 6.305: 98.1042% ( 1) 00:14:58.343 6.305 - 6.353: 98.1118% ( 1) 00:14:58.343 6.684 - 6.732: 98.1194% ( 1) 00:14:58.343 6.732 - 6.779: 98.1270% ( 1) 00:14:58.343 6.827 - 6.874: 98.1498% ( 3) 00:14:58.343 7.064 - 7.111: 98.1575% ( 1) 00:14:58.343 7.111 - 7.159: 98.1727% ( 2) 00:14:58.343 7.159 - 7.206: 98.1803% ( 1) 00:14:58.343 7.206 - 7.253: 98.1955% ( 2) 00:14:58.343 7.348 - 7.396: 98.2108% ( 2) 00:14:58.343 7.396 - 7.443: 98.2184% ( 1) 00:14:58.343 7.443 - 7.490: 98.2260% ( 1) 00:14:58.343 7.490 - 7.538: 98.2412% ( 2) 00:14:58.343 7.538 - 7.585: 98.2564% ( 2) 00:14:58.343 7.585 - 7.633: 98.2717% ( 2) 00:14:58.343 7.633 - 7.680: 98.2869% ( 2) 00:14:58.343 7.775 - 7.822: 98.3097% ( 3) 00:14:58.343 7.870 - 7.917: 98.3173% ( 1) 00:14:58.343 7.917 - 7.964: 98.3326% ( 2) 00:14:58.343 7.964 - 8.012: 98.3402% ( 1) 00:14:58.343 8.012 - 8.059: 98.3706% ( 4) 00:14:58.343 8.059 - 8.107: 98.3783% ( 1) 00:14:58.343 8.107 - 8.154: 98.3859% ( 1) 00:14:58.343 8.154 - 8.201: 98.4011% ( 2) 00:14:58.343 8.201 - 8.249: 98.4239% ( 3) 00:14:58.343 8.344 - 8.391: 98.4392% ( 2) 00:14:58.343 8.391 - 8.439: 98.4468% ( 1) 00:14:58.343 8.439 - 8.486: 98.4620% ( 2) 00:14:58.343 8.486 - 8.533: 98.4696% ( 1) 00:14:58.343 8.533 - 8.581: 98.4772% ( 1) 00:14:58.343 8.628 - 8.676: 98.4848% ( 1) 00:14:58.343 8.676 - 8.723: 98.4925% ( 1) 00:14:58.343 8.770 - 8.818: 98.5077% ( 2) 
00:14:58.343 8.960 - 9.007: 98.5153% ( 1) 00:14:58.343 9.102 - 9.150: 98.5229% ( 1) 00:14:58.343 9.197 - 9.244: 98.5305% ( 1) 00:14:58.343 9.244 - 9.292: 98.5381% ( 1) 00:14:58.343 9.292 - 9.339: 98.5458% ( 1) 00:14:58.343 9.339 - 9.387: 98.5534% ( 1) 00:14:58.343 9.387 - 9.434: 98.5610% ( 1) 00:14:58.343 9.481 - 9.529: 98.5686% ( 1) 00:14:58.343 9.529 - 9.576: 98.5762% ( 1) 00:14:58.343 9.671 - 9.719: 98.5914% ( 2) 00:14:58.343 9.813 - 9.861: 98.5991% ( 1) 00:14:58.343 10.335 - 10.382: 98.6067% ( 1) 00:14:58.343 10.477 - 10.524: 98.6143% ( 1) 00:14:58.344 10.524 - 10.572: 98.6219% ( 1) 00:14:58.344 10.667 - 10.714: 98.6295% ( 1) 00:14:58.344 10.904 - 10.951: 98.6371% ( 1) 00:14:58.344 10.951 - 10.999: 98.6447% ( 1) 00:14:58.344 10.999 - 11.046: 98.6524% ( 1) 00:14:58.344 11.141 - 11.188: 98.6600% ( 1) 00:14:58.344 11.236 - 11.283: 98.6752% ( 2) 00:14:58.344 11.378 - 11.425: 98.6828% ( 1) 00:14:58.344 11.473 - 11.520: 98.6904% ( 1) 00:14:58.344 11.662 - 11.710: 98.6980% ( 1) 00:14:58.344 11.710 - 11.757: 98.7056% ( 1) 00:14:58.344 11.852 - 11.899: 98.7133% ( 1) 00:14:58.344 12.231 - 12.326: 98.7285% ( 2) 00:14:58.344 12.421 - 12.516: 98.7437% ( 2) 00:14:58.344 12.516 - 12.610: 98.7742% ( 4) 00:14:58.344 12.610 - 12.705: 98.7818% ( 1) 00:14:58.344 12.705 - 12.800: 98.8122% ( 4) 00:14:58.344 12.990 - 13.084: 98.8199% ( 1) 00:14:58.344 13.084 - 13.179: 98.8275% ( 1) 00:14:58.344 13.274 - 13.369: 98.8351% ( 1) 00:14:58.344 13.559 - 13.653: 98.8427% ( 1) 00:14:58.344 13.843 - 13.938: 98.8503% ( 1) 00:14:58.344 13.938 - 14.033: 98.8732% ( 3) 00:14:58.344 14.127 - 14.222: 98.8808% ( 1) 00:14:58.344 14.222 - 14.317: 98.8960% ( 2) 00:14:58.344 14.791 - 14.886: 98.9036% ( 1) 00:14:58.344 14.886 - 14.981: 98.9112% ( 1) 00:14:58.344 15.170 - 15.265: 98.9188% ( 1) 00:14:58.344 15.360 - 15.455: 98.9265% ( 1) 00:14:58.344 17.256 - 17.351: 98.9493% ( 3) 00:14:58.344 17.446 - 17.541: 98.9797% ( 4) 00:14:58.344 17.541 - 17.636: 99.0254% ( 6) 00:14:58.344 17.636 - 17.730: 99.0559% ( 
4) 00:14:58.344 17.730 - 17.825: 99.1092% ( 7) 00:14:58.344 17.825 - 17.920: 99.1701% ( 8) 00:14:58.344 17.920 - 18.015: 99.2386% ( 9) 00:14:58.344 18.015 - 18.110: 99.3224% ( 11) 00:14:58.344 18.110 - 18.204: 99.3681% ( 6) 00:14:58.344 18.204 - 18.299: 99.4137% ( 6) 00:14:58.344 18.299 - 18.394: 99.5356% ( 16) 00:14:58.344 18.394 - 18.489: 99.6345% ( 13) 00:14:58.344 18.489 - 18.584: 99.7031% ( 9) 00:14:58.344 18.584 - 18.679: 99.7411% ( 5) 00:14:58.344 18.679 - 18.773: 99.7564% ( 2) 00:14:58.344 18.773 - 18.868: 99.7716% ( 2) 00:14:58.344 18.868 - 18.963: 99.8173% ( 6) 00:14:58.344 18.963 - 19.058: 99.8325% ( 2) 00:14:58.344 19.058 - 19.153: 99.8477% ( 2) 00:14:58.344 19.153 - 19.247: 99.8706% ( 3) 00:14:58.344 19.247 - 19.342: 99.8782% ( 1) 00:14:58.344 19.437 - 19.532: 99.8934% ( 2) 00:14:58.344 21.049 - 21.144: 99.9010% ( 1) 00:14:58.344 21.333 - 21.428: 99.9086% ( 1) 00:14:58.344 27.307 - 27.496: 99.9162% ( 1) 00:14:58.344 3980.705 - 4004.978: 100.0000% ( 11) 00:14:58.344 00:14:58.344 Complete histogram 00:14:58.344 ================== 00:14:58.344 Range in us Cumulative Count 00:14:58.344 2.062 - 2.074: 0.0761% ( 10) 00:14:58.344 2.074 - 2.086: 15.3495% ( 2006) 00:14:58.344 2.086 - 2.098: 40.0259% ( 3241) 00:14:58.344 2.098 - 2.110: 43.7795% ( 493) 00:14:58.344 2.110 - 2.121: 52.5430% ( 1151) 00:14:58.344 2.121 - 2.133: 57.0199% ( 588) 00:14:58.344 2.133 - 2.145: 59.3117% ( 301) 00:14:58.344 2.145 - 2.157: 69.6665% ( 1360) 00:14:58.344 2.157 - 2.169: 76.1154% ( 847) 00:14:58.344 2.169 - 2.181: 77.7905% ( 220) 00:14:58.344 2.181 - 2.193: 82.0542% ( 560) 00:14:58.344 2.193 - 2.204: 84.0719% ( 265) 00:14:58.344 2.204 - 2.216: 84.8637% ( 104) 00:14:58.344 2.216 - 2.228: 87.9549% ( 406) 00:14:58.344 2.228 - 2.240: 91.4954% ( 465) 00:14:58.344 2.240 - 2.252: 92.6222% ( 148) 00:14:58.344 2.252 - 2.264: 93.9089% ( 169) 00:14:58.344 2.264 - 2.276: 94.6932% ( 103) 00:14:58.344 2.276 - 2.287: 94.9901% ( 39) 00:14:58.344 2.287 - 2.299: 95.2337% ( 32) 00:14:58.344 2.299 - 
2.311: 95.7210% ( 64) 00:14:58.344 2.311 - 2.323: 95.9875% ( 35) 00:14:58.344 2.323 - 2.335: 96.0408% ( 7) 00:14:58.344 2.335 - 2.347: 96.0789% ( 5) 00:14:58.344 2.347 - 2.359: 96.2007% ( 16) 00:14:58.344 2.359 - 2.370: 96.4215% ( 29) 00:14:58.344 2.370 - 2.382: 96.7108% ( 38) 00:14:58.344 2.382 - 2.394: 97.0611% ( 46) 00:14:58.344 2.394 - 2.406: 97.3199% ( 34) 00:14:58.344 2.406 - 2.418: 97.5864% ( 35) 00:14:58.344 2.418 - 2.430: 97.7539% ( 22) 00:14:58.344 2.430 - 2.441: 97.9214% ( 22) 00:14:58.344 2.441 - 2.453: 98.0128% ( 12) 00:14:58.344 2.453 - 2.465: 98.1194% ( 14) 00:14:58.344 2.465 - 2.477: 98.1727% ( 7) 00:14:58.344 2.477 - 2.489: 98.2031% ( 4) 00:14:58.344 2.489 - 2.501: 98.2717% ( 9) 00:14:58.344 2.501 - 2.513: 98.2869% ( 2) 00:14:58.344 2.513 - 2.524: 98.2945% ( 1) 00:14:58.344 2.524 - 2.536: 98.3250% ( 4) 00:14:58.344 2.536 - 2.548: 98.3478% ( 3) 00:14:58.344 2.548 - 2.560: 98.3630% ( 2) 00:14:58.344 2.584 - 2.596: 98.3783% ( 2) 00:14:58.344 2.596 - 2.607: 98.3859% ( 1) 00:14:58.344 2.667 - 2.679: 98.3935% ( 1) 00:14:58.344 2.679 - 2.690: 98.4011% ( 1) 00:14:58.344 2.726 - 2.738: 98.4087% ( 1) 00:14:58.344 2.738 - 2.750: 98.4163% ( 1) 00:14:58.344 2.773 - 2.785: 98.4239% ( 1) 00:14:58.344 2.797 - 2.809: 98.4316% ( 1) 00:14:58.344 2.821 - 2.833: 98.4468% ( 2) 00:14:58.344 2.892 - 2.904: 98.4620% ( 2) 00:14:58.344 2.927 - 2.939: 98.4696% ( 1) 00:14:58.344 3.129 - 3.153: 98.4772% ( 1) 00:14:58.344 3.176 - 3.200: 98.4925% ( 2) 00:14:58.344 3.200 - 3.224: 98.5077% ( 2) 00:14:58.344 3.247 - 3.271: 98.5229% ( 2) 00:14:58.344 3.271 - 3.295: 98.5381% ( 2) 00:14:58.344 3.295 - 3.319: 98.5458% ( 1) 00:14:58.344 3.319 - 3.342: 98.5610% ( 2) 00:14:58.344 3.366 - 3.390: 98.5762% ( 2) 00:14:58.344 3.390 - 3.413: 98.5838% ( 1) 00:14:58.344 3.437 - 3.461: 98.5914% ( 1) 00:14:58.344 3.484 - 3.508: 98.6067% ( 2) 00:14:58.344 3.532 - 3.556: 98.6143% ( 1) 00:14:58.344 3.556 - 3.579: 98.6295% ( 2) 00:14:58.344 3.579 - 3.603: 98.6371% ( 1) 00:14:58.344 3.603 - 3.627: 
98.6447% ( 1) 00:14:58.344 3.816 - 3.840: 98.6524% ( 1) 00:14:58.344 4.006 - 4.030: 9[2024-07-25 19:05:50.755723] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:58.344 8.6600% ( 1) 00:14:58.344 5.215 - 5.239: 98.6752% ( 2) 00:14:58.344 5.239 - 5.262: 98.6828% ( 1) 00:14:58.344 5.452 - 5.476: 98.6904% ( 1) 00:14:58.344 5.523 - 5.547: 98.6980% ( 1) 00:14:58.344 5.594 - 5.618: 98.7056% ( 1) 00:14:58.344 5.618 - 5.641: 98.7133% ( 1) 00:14:58.344 5.926 - 5.950: 98.7285% ( 2) 00:14:58.344 5.973 - 5.997: 98.7361% ( 1) 00:14:58.344 6.163 - 6.210: 98.7513% ( 2) 00:14:58.344 6.210 - 6.258: 98.7589% ( 1) 00:14:58.344 6.258 - 6.305: 98.7666% ( 1) 00:14:58.344 6.495 - 6.542: 98.7742% ( 1) 00:14:58.344 6.684 - 6.732: 98.7818% ( 1) 00:14:58.344 6.827 - 6.874: 98.7894% ( 1) 00:14:58.344 7.396 - 7.443: 98.7970% ( 1) 00:14:58.344 15.455 - 15.550: 98.8046% ( 1) 00:14:58.344 15.644 - 15.739: 98.8122% ( 1) 00:14:58.344 15.834 - 15.929: 98.8427% ( 4) 00:14:58.344 15.929 - 16.024: 98.8884% ( 6) 00:14:58.344 16.024 - 16.119: 98.9341% ( 6) 00:14:58.344 16.119 - 16.213: 98.9645% ( 4) 00:14:58.344 16.213 - 16.308: 98.9950% ( 4) 00:14:58.344 16.308 - 16.403: 99.0254% ( 4) 00:14:58.344 16.403 - 16.498: 99.0940% ( 9) 00:14:58.344 16.498 - 16.593: 99.1092% ( 2) 00:14:58.344 16.593 - 16.687: 99.1473% ( 5) 00:14:58.344 16.687 - 16.782: 99.1929% ( 6) 00:14:58.344 16.782 - 16.877: 99.2462% ( 7) 00:14:58.344 16.877 - 16.972: 99.2691% ( 3) 00:14:58.344 17.161 - 17.256: 99.2767% ( 1) 00:14:58.344 17.256 - 17.351: 99.2995% ( 3) 00:14:58.344 17.636 - 17.730: 99.3071% ( 1) 00:14:58.344 17.825 - 17.920: 99.3148% ( 1) 00:14:58.344 18.015 - 18.110: 99.3224% ( 1) 00:14:58.344 18.489 - 18.584: 99.3300% ( 1) 00:14:58.344 22.756 - 22.850: 99.3376% ( 1) 00:14:58.344 27.307 - 27.496: 99.3452% ( 1) 00:14:58.344 3980.705 - 4004.978: 99.9239% ( 76) 00:14:58.344 4004.978 - 4029.250: 100.0000% ( 10) 00:14:58.344 00:14:58.344 19:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:58.344 19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:58.344 19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:58.344 19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:58.344 19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:58.602 [ 00:14:58.602 { 00:14:58.602 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:58.602 "subtype": "Discovery", 00:14:58.602 "listen_addresses": [], 00:14:58.602 "allow_any_host": true, 00:14:58.602 "hosts": [] 00:14:58.602 }, 00:14:58.602 { 00:14:58.602 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:58.602 "subtype": "NVMe", 00:14:58.602 "listen_addresses": [ 00:14:58.602 { 00:14:58.602 "trtype": "VFIOUSER", 00:14:58.602 "adrfam": "IPv4", 00:14:58.602 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:58.602 "trsvcid": "0" 00:14:58.602 } 00:14:58.602 ], 00:14:58.602 "allow_any_host": true, 00:14:58.602 "hosts": [], 00:14:58.602 "serial_number": "SPDK1", 00:14:58.602 "model_number": "SPDK bdev Controller", 00:14:58.602 "max_namespaces": 32, 00:14:58.602 "min_cntlid": 1, 00:14:58.602 "max_cntlid": 65519, 00:14:58.602 "namespaces": [ 00:14:58.602 { 00:14:58.602 "nsid": 1, 00:14:58.602 "bdev_name": "Malloc1", 00:14:58.602 "name": "Malloc1", 00:14:58.602 "nguid": "AE6DA195A80B43CF99D62A6CC18A94B8", 00:14:58.602 "uuid": "ae6da195-a80b-43cf-99d6-2a6cc18a94b8" 00:14:58.602 } 00:14:58.602 ] 00:14:58.602 }, 00:14:58.602 { 00:14:58.602 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:58.602 "subtype": "NVMe", 
00:14:58.602 "listen_addresses": [ 00:14:58.602 { 00:14:58.603 "trtype": "VFIOUSER", 00:14:58.603 "adrfam": "IPv4", 00:14:58.603 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:58.603 "trsvcid": "0" 00:14:58.603 } 00:14:58.603 ], 00:14:58.603 "allow_any_host": true, 00:14:58.603 "hosts": [], 00:14:58.603 "serial_number": "SPDK2", 00:14:58.603 "model_number": "SPDK bdev Controller", 00:14:58.603 "max_namespaces": 32, 00:14:58.603 "min_cntlid": 1, 00:14:58.603 "max_cntlid": 65519, 00:14:58.603 "namespaces": [ 00:14:58.603 { 00:14:58.603 "nsid": 1, 00:14:58.603 "bdev_name": "Malloc2", 00:14:58.603 "name": "Malloc2", 00:14:58.603 "nguid": "14A993D5519D4C9ABD18300B4185DE49", 00:14:58.603 "uuid": "14a993d5-519d-4c9a-bd18-300b4185de49" 00:14:58.603 } 00:14:58.603 ] 00:14:58.603 } 00:14:58.603 ] 00:14:58.603 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:58.603 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=883434 00:14:58.603 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:58.603 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:58.603 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:58.603 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:58.603 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:58.603 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:58.603 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:58.603 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:58.861 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.861 [2024-07-25 19:05:51.224595] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.119 Malloc3 00:14:59.119 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:59.119 [2024-07-25 19:05:51.586392] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.377 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:59.377 Asynchronous Event Request test 00:14:59.377 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.377 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.377 Registering asynchronous event callbacks... 00:14:59.377 Starting namespace attribute notice tests for all controllers... 00:14:59.377 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:59.377 aer_cb - Changed Namespace 00:14:59.377 Cleaning up... 
00:14:59.377 [ 00:14:59.377 { 00:14:59.377 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:59.377 "subtype": "Discovery", 00:14:59.377 "listen_addresses": [], 00:14:59.377 "allow_any_host": true, 00:14:59.377 "hosts": [] 00:14:59.377 }, 00:14:59.377 { 00:14:59.377 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:59.377 "subtype": "NVMe", 00:14:59.377 "listen_addresses": [ 00:14:59.377 { 00:14:59.377 "trtype": "VFIOUSER", 00:14:59.377 "adrfam": "IPv4", 00:14:59.377 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:59.377 "trsvcid": "0" 00:14:59.377 } 00:14:59.377 ], 00:14:59.377 "allow_any_host": true, 00:14:59.377 "hosts": [], 00:14:59.377 "serial_number": "SPDK1", 00:14:59.377 "model_number": "SPDK bdev Controller", 00:14:59.377 "max_namespaces": 32, 00:14:59.377 "min_cntlid": 1, 00:14:59.377 "max_cntlid": 65519, 00:14:59.377 "namespaces": [ 00:14:59.377 { 00:14:59.377 "nsid": 1, 00:14:59.377 "bdev_name": "Malloc1", 00:14:59.377 "name": "Malloc1", 00:14:59.377 "nguid": "AE6DA195A80B43CF99D62A6CC18A94B8", 00:14:59.377 "uuid": "ae6da195-a80b-43cf-99d6-2a6cc18a94b8" 00:14:59.377 }, 00:14:59.377 { 00:14:59.377 "nsid": 2, 00:14:59.377 "bdev_name": "Malloc3", 00:14:59.377 "name": "Malloc3", 00:14:59.377 "nguid": "6EFBE223924D462991D2ACFD7561C55C", 00:14:59.377 "uuid": "6efbe223-924d-4629-91d2-acfd7561c55c" 00:14:59.377 } 00:14:59.377 ] 00:14:59.377 }, 00:14:59.377 { 00:14:59.377 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:59.377 "subtype": "NVMe", 00:14:59.377 "listen_addresses": [ 00:14:59.377 { 00:14:59.377 "trtype": "VFIOUSER", 00:14:59.377 "adrfam": "IPv4", 00:14:59.377 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:59.377 "trsvcid": "0" 00:14:59.377 } 00:14:59.377 ], 00:14:59.377 "allow_any_host": true, 00:14:59.377 "hosts": [], 00:14:59.377 "serial_number": "SPDK2", 00:14:59.377 "model_number": "SPDK bdev Controller", 00:14:59.377 "max_namespaces": 32, 00:14:59.377 "min_cntlid": 1, 00:14:59.377 "max_cntlid": 65519, 00:14:59.377 "namespaces": [ 
00:14:59.377 { 00:14:59.377 "nsid": 1, 00:14:59.377 "bdev_name": "Malloc2", 00:14:59.377 "name": "Malloc2", 00:14:59.377 "nguid": "14A993D5519D4C9ABD18300B4185DE49", 00:14:59.377 "uuid": "14a993d5-519d-4c9a-bd18-300b4185de49" 00:14:59.377 } 00:14:59.377 ] 00:14:59.377 } 00:14:59.377 ] 00:14:59.377 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 883434 00:14:59.377 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.377 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:59.377 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:59.377 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:59.637 [2024-07-25 19:05:51.865480] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:59.637 [2024-07-25 19:05:51.865531] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883555 ] 00:14:59.637 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.637 [2024-07-25 19:05:51.900298] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:59.637 [2024-07-25 19:05:51.908658] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:59.637 [2024-07-25 19:05:51.908686] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbd5ecf3000 00:14:59.637 [2024-07-25 19:05:51.909661] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.637 [2024-07-25 19:05:51.910665] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.637 [2024-07-25 19:05:51.911665] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.637 [2024-07-25 19:05:51.912677] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.637 [2024-07-25 19:05:51.913680] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.637 [2024-07-25 19:05:51.914686] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.637 [2024-07-25 19:05:51.915694] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.637 [2024-07-25 19:05:51.916700] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.637 [2024-07-25 19:05:51.917712] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:59.637 [2024-07-25 19:05:51.917733] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbd5ece8000 00:14:59.637 [2024-07-25 19:05:51.919115] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:59.637 [2024-07-25 19:05:51.937462] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:59.637 [2024-07-25 19:05:51.937496] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:59.637 [2024-07-25 19:05:51.942584] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:59.637 [2024-07-25 19:05:51.942635] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:59.637 [2024-07-25 19:05:51.942719] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:59.637 [2024-07-25 19:05:51.942741] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:59.637 [2024-07-25 19:05:51.942750] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:59.637 [2024-07-25 19:05:51.943586] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:59.637 [2024-07-25 19:05:51.943610] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:59.637 [2024-07-25 19:05:51.943624] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:59.637 [2024-07-25 19:05:51.944598] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:59.637 [2024-07-25 19:05:51.944618] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:59.637 [2024-07-25 19:05:51.944632] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:59.637 [2024-07-25 19:05:51.945606] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:59.637 [2024-07-25 19:05:51.945626] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:59.637 [2024-07-25 19:05:51.946609] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:59.637 [2024-07-25 19:05:51.946629] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:59.637 [2024-07-25 19:05:51.946639] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:59.637 [2024-07-25 19:05:51.946650] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:59.637 [2024-07-25 19:05:51.946759] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:59.638 [2024-07-25 19:05:51.946767] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:59.638 [2024-07-25 19:05:51.946775] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:59.638 [2024-07-25 19:05:51.947615] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:59.638 [2024-07-25 19:05:51.948616] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:59.638 [2024-07-25 19:05:51.949623] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:59.638 [2024-07-25 19:05:51.950620] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:59.638 [2024-07-25 19:05:51.950712] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:59.638 [2024-07-25 19:05:51.951639] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:59.638 [2024-07-25 19:05:51.951660] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:59.638 [2024-07-25 19:05:51.951669] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:51.951694] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:59.638 [2024-07-25 19:05:51.951711] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:51.951733] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:59.638 [2024-07-25 19:05:51.951742] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.638 [2024-07-25 19:05:51.951752] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.638 [2024-07-25 19:05:51.951771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.638 [2024-07-25 19:05:51.958114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:59.638 [2024-07-25 19:05:51.958137] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:59.638 [2024-07-25 19:05:51.958146] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:59.638 [2024-07-25 19:05:51.958154] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:59.638 [2024-07-25 19:05:51.958166] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:59.638 [2024-07-25 19:05:51.958174] 
nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:59.638 [2024-07-25 19:05:51.958182] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:59.638 [2024-07-25 19:05:51.958190] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:51.958203] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:51.958224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:59.638 [2024-07-25 19:05:51.966116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:59.638 [2024-07-25 19:05:51.966143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.638 [2024-07-25 19:05:51.966158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.638 [2024-07-25 19:05:51.966170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.638 [2024-07-25 19:05:51.966182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.638 [2024-07-25 19:05:51.966191] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:51.966206] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:51.966221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:59.638 [2024-07-25 19:05:51.972115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:59.638 [2024-07-25 19:05:51.972135] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:59.638 [2024-07-25 19:05:51.972144] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:51.972166] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:51.972176] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:51.972194] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:59.638 [2024-07-25 19:05:51.981111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:59.638 [2024-07-25 19:05:51.981203] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:51.981221] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:51.981235] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:59.638 [2024-07-25 19:05:51.981244] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:59.638 [2024-07-25 19:05:51.981250] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.638 [2024-07-25 19:05:51.981260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:59.638 [2024-07-25 19:05:51.989113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:59.638 [2024-07-25 19:05:51.989147] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:59.638 [2024-07-25 19:05:51.989164] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:51.989179] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:51.989193] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:59.638 [2024-07-25 19:05:51.989202] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.638 [2024-07-25 19:05:51.989208] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.638 [2024-07-25 19:05:51.989217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.638 [2024-07-25 19:05:51.997113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:000a p:1 m:0 dnr:0 00:14:59.638 [2024-07-25 19:05:51.997166] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:51.997183] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:51.997197] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:59.638 [2024-07-25 19:05:51.997206] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.638 [2024-07-25 19:05:51.997213] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.638 [2024-07-25 19:05:51.997223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.638 [2024-07-25 19:05:52.005112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:59.638 [2024-07-25 19:05:52.005134] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:52.005159] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:52.005173] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:52.005192] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:59.638 
[2024-07-25 19:05:52.005202] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:52.005211] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:52.005220] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:59.638 [2024-07-25 19:05:52.005228] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:59.638 [2024-07-25 19:05:52.005236] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:59.638 [2024-07-25 19:05:52.005260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:59.638 [2024-07-25 19:05:52.013114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:59.638 [2024-07-25 19:05:52.013140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:59.638 [2024-07-25 19:05:52.021114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:59.638 [2024-07-25 19:05:52.021139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:59.638 [2024-07-25 19:05:52.029114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:59.638 [2024-07-25 19:05:52.029139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:59.639 [2024-07-25 19:05:52.037115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:59.639 [2024-07-25 19:05:52.037156] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:59.639 [2024-07-25 19:05:52.037167] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:59.639 [2024-07-25 19:05:52.037173] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:59.639 [2024-07-25 19:05:52.037179] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:59.639 [2024-07-25 19:05:52.037184] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:59.639 [2024-07-25 19:05:52.037194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:59.639 [2024-07-25 19:05:52.037206] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:59.639 [2024-07-25 19:05:52.037214] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:59.639 [2024-07-25 19:05:52.037220] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.639 [2024-07-25 19:05:52.037229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:59.639 [2024-07-25 19:05:52.037240] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:59.639 [2024-07-25 19:05:52.037248] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fb000 00:14:59.639 [2024-07-25 19:05:52.037253] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.639 [2024-07-25 19:05:52.037265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.639 [2024-07-25 19:05:52.037278] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:59.639 [2024-07-25 19:05:52.037286] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:59.639 [2024-07-25 19:05:52.037292] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.639 [2024-07-25 19:05:52.037301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:59.639 [2024-07-25 19:05:52.044114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:59.639 [2024-07-25 19:05:52.044156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:59.639 [2024-07-25 19:05:52.044173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:59.639 [2024-07-25 19:05:52.044186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:59.639 ===================================================== 00:14:59.639 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:59.639 ===================================================== 00:14:59.639 Controller Capabilities/Features 00:14:59.639 ================================ 00:14:59.639 Vendor ID: 4e58 00:14:59.639 
Subsystem Vendor ID: 4e58 00:14:59.639 Serial Number: SPDK2 00:14:59.639 Model Number: SPDK bdev Controller 00:14:59.639 Firmware Version: 24.09 00:14:59.639 Recommended Arb Burst: 6 00:14:59.639 IEEE OUI Identifier: 8d 6b 50 00:14:59.639 Multi-path I/O 00:14:59.639 May have multiple subsystem ports: Yes 00:14:59.639 May have multiple controllers: Yes 00:14:59.639 Associated with SR-IOV VF: No 00:14:59.639 Max Data Transfer Size: 131072 00:14:59.639 Max Number of Namespaces: 32 00:14:59.639 Max Number of I/O Queues: 127 00:14:59.639 NVMe Specification Version (VS): 1.3 00:14:59.639 NVMe Specification Version (Identify): 1.3 00:14:59.639 Maximum Queue Entries: 256 00:14:59.639 Contiguous Queues Required: Yes 00:14:59.639 Arbitration Mechanisms Supported 00:14:59.639 Weighted Round Robin: Not Supported 00:14:59.639 Vendor Specific: Not Supported 00:14:59.639 Reset Timeout: 15000 ms 00:14:59.639 Doorbell Stride: 4 bytes 00:14:59.639 NVM Subsystem Reset: Not Supported 00:14:59.639 Command Sets Supported 00:14:59.639 NVM Command Set: Supported 00:14:59.639 Boot Partition: Not Supported 00:14:59.639 Memory Page Size Minimum: 4096 bytes 00:14:59.639 Memory Page Size Maximum: 4096 bytes 00:14:59.639 Persistent Memory Region: Not Supported 00:14:59.639 Optional Asynchronous Events Supported 00:14:59.639 Namespace Attribute Notices: Supported 00:14:59.639 Firmware Activation Notices: Not Supported 00:14:59.639 ANA Change Notices: Not Supported 00:14:59.639 PLE Aggregate Log Change Notices: Not Supported 00:14:59.639 LBA Status Info Alert Notices: Not Supported 00:14:59.639 EGE Aggregate Log Change Notices: Not Supported 00:14:59.639 Normal NVM Subsystem Shutdown event: Not Supported 00:14:59.639 Zone Descriptor Change Notices: Not Supported 00:14:59.639 Discovery Log Change Notices: Not Supported 00:14:59.639 Controller Attributes 00:14:59.639 128-bit Host Identifier: Supported 00:14:59.639 Non-Operational Permissive Mode: Not Supported 00:14:59.639 NVM Sets: Not Supported 
00:14:59.639 Read Recovery Levels: Not Supported 00:14:59.639 Endurance Groups: Not Supported 00:14:59.639 Predictable Latency Mode: Not Supported 00:14:59.639 Traffic Based Keep ALive: Not Supported 00:14:59.639 Namespace Granularity: Not Supported 00:14:59.639 SQ Associations: Not Supported 00:14:59.639 UUID List: Not Supported 00:14:59.639 Multi-Domain Subsystem: Not Supported 00:14:59.639 Fixed Capacity Management: Not Supported 00:14:59.639 Variable Capacity Management: Not Supported 00:14:59.639 Delete Endurance Group: Not Supported 00:14:59.639 Delete NVM Set: Not Supported 00:14:59.639 Extended LBA Formats Supported: Not Supported 00:14:59.639 Flexible Data Placement Supported: Not Supported 00:14:59.639 00:14:59.639 Controller Memory Buffer Support 00:14:59.639 ================================ 00:14:59.639 Supported: No 00:14:59.639 00:14:59.639 Persistent Memory Region Support 00:14:59.639 ================================ 00:14:59.639 Supported: No 00:14:59.639 00:14:59.639 Admin Command Set Attributes 00:14:59.639 ============================ 00:14:59.639 Security Send/Receive: Not Supported 00:14:59.639 Format NVM: Not Supported 00:14:59.639 Firmware Activate/Download: Not Supported 00:14:59.639 Namespace Management: Not Supported 00:14:59.639 Device Self-Test: Not Supported 00:14:59.639 Directives: Not Supported 00:14:59.639 NVMe-MI: Not Supported 00:14:59.639 Virtualization Management: Not Supported 00:14:59.639 Doorbell Buffer Config: Not Supported 00:14:59.639 Get LBA Status Capability: Not Supported 00:14:59.639 Command & Feature Lockdown Capability: Not Supported 00:14:59.639 Abort Command Limit: 4 00:14:59.639 Async Event Request Limit: 4 00:14:59.639 Number of Firmware Slots: N/A 00:14:59.639 Firmware Slot 1 Read-Only: N/A 00:14:59.639 Firmware Activation Without Reset: N/A 00:14:59.639 Multiple Update Detection Support: N/A 00:14:59.639 Firmware Update Granularity: No Information Provided 00:14:59.639 Per-Namespace SMART Log: No 00:14:59.639 
Asymmetric Namespace Access Log Page: Not Supported 00:14:59.639 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:59.639 Command Effects Log Page: Supported 00:14:59.639 Get Log Page Extended Data: Supported 00:14:59.639 Telemetry Log Pages: Not Supported 00:14:59.639 Persistent Event Log Pages: Not Supported 00:14:59.639 Supported Log Pages Log Page: May Support 00:14:59.639 Commands Supported & Effects Log Page: Not Supported 00:14:59.639 Feature Identifiers & Effects Log Page:May Support 00:14:59.639 NVMe-MI Commands & Effects Log Page: May Support 00:14:59.639 Data Area 4 for Telemetry Log: Not Supported 00:14:59.639 Error Log Page Entries Supported: 128 00:14:59.639 Keep Alive: Supported 00:14:59.639 Keep Alive Granularity: 10000 ms 00:14:59.639 00:14:59.639 NVM Command Set Attributes 00:14:59.639 ========================== 00:14:59.639 Submission Queue Entry Size 00:14:59.639 Max: 64 00:14:59.639 Min: 64 00:14:59.639 Completion Queue Entry Size 00:14:59.639 Max: 16 00:14:59.639 Min: 16 00:14:59.639 Number of Namespaces: 32 00:14:59.639 Compare Command: Supported 00:14:59.639 Write Uncorrectable Command: Not Supported 00:14:59.639 Dataset Management Command: Supported 00:14:59.639 Write Zeroes Command: Supported 00:14:59.639 Set Features Save Field: Not Supported 00:14:59.639 Reservations: Not Supported 00:14:59.639 Timestamp: Not Supported 00:14:59.639 Copy: Supported 00:14:59.639 Volatile Write Cache: Present 00:14:59.639 Atomic Write Unit (Normal): 1 00:14:59.639 Atomic Write Unit (PFail): 1 00:14:59.639 Atomic Compare & Write Unit: 1 00:14:59.639 Fused Compare & Write: Supported 00:14:59.639 Scatter-Gather List 00:14:59.639 SGL Command Set: Supported (Dword aligned) 00:14:59.639 SGL Keyed: Not Supported 00:14:59.639 SGL Bit Bucket Descriptor: Not Supported 00:14:59.639 SGL Metadata Pointer: Not Supported 00:14:59.639 Oversized SGL: Not Supported 00:14:59.639 SGL Metadata Address: Not Supported 00:14:59.639 SGL Offset: Not Supported 00:14:59.639 Transport 
SGL Data Block: Not Supported 00:14:59.639 Replay Protected Memory Block: Not Supported 00:14:59.639 00:14:59.639 Firmware Slot Information 00:14:59.640 ========================= 00:14:59.640 Active slot: 1 00:14:59.640 Slot 1 Firmware Revision: 24.09 00:14:59.640 00:14:59.640 00:14:59.640 Commands Supported and Effects 00:14:59.640 ============================== 00:14:59.640 Admin Commands 00:14:59.640 -------------- 00:14:59.640 Get Log Page (02h): Supported 00:14:59.640 Identify (06h): Supported 00:14:59.640 Abort (08h): Supported 00:14:59.640 Set Features (09h): Supported 00:14:59.640 Get Features (0Ah): Supported 00:14:59.640 Asynchronous Event Request (0Ch): Supported 00:14:59.640 Keep Alive (18h): Supported 00:14:59.640 I/O Commands 00:14:59.640 ------------ 00:14:59.640 Flush (00h): Supported LBA-Change 00:14:59.640 Write (01h): Supported LBA-Change 00:14:59.640 Read (02h): Supported 00:14:59.640 Compare (05h): Supported 00:14:59.640 Write Zeroes (08h): Supported LBA-Change 00:14:59.640 Dataset Management (09h): Supported LBA-Change 00:14:59.640 Copy (19h): Supported LBA-Change 00:14:59.640 00:14:59.640 Error Log 00:14:59.640 ========= 00:14:59.640 00:14:59.640 Arbitration 00:14:59.640 =========== 00:14:59.640 Arbitration Burst: 1 00:14:59.640 00:14:59.640 Power Management 00:14:59.640 ================ 00:14:59.640 Number of Power States: 1 00:14:59.640 Current Power State: Power State #0 00:14:59.640 Power State #0: 00:14:59.640 Max Power: 0.00 W 00:14:59.640 Non-Operational State: Operational 00:14:59.640 Entry Latency: Not Reported 00:14:59.640 Exit Latency: Not Reported 00:14:59.640 Relative Read Throughput: 0 00:14:59.640 Relative Read Latency: 0 00:14:59.640 Relative Write Throughput: 0 00:14:59.640 Relative Write Latency: 0 00:14:59.640 Idle Power: Not Reported 00:14:59.640 Active Power: Not Reported 00:14:59.640 Non-Operational Permissive Mode: Not Supported 00:14:59.640 00:14:59.640 Health Information 00:14:59.640 ================== 00:14:59.640 
Critical Warnings: 00:14:59.640 Available Spare Space: OK 00:14:59.640 Temperature: OK 00:14:59.640 Device Reliability: OK 00:14:59.640 Read Only: No 00:14:59.640 Volatile Memory Backup: OK 00:14:59.640 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:59.640 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:59.640 Available Spare: 0% 00:14:59.640 Available Sp[2024-07-25 19:05:52.044321] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:59.640 [2024-07-25 19:05:52.052115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:59.640 [2024-07-25 19:05:52.052165] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:59.640 [2024-07-25 19:05:52.052182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.640 [2024-07-25 19:05:52.052193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.640 [2024-07-25 19:05:52.052202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.640 [2024-07-25 19:05:52.052212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.640 [2024-07-25 19:05:52.052277] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:59.640 [2024-07-25 19:05:52.052298] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:59.640 [2024-07-25 19:05:52.053275] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:59.640 [2024-07-25 19:05:52.053360] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:59.640 [2024-07-25 19:05:52.053376] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:59.640 [2024-07-25 19:05:52.056114] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:59.640 [2024-07-25 19:05:52.056138] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 2 milliseconds 00:14:59.640 [2024-07-25 19:05:52.056190] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:59.640 [2024-07-25 19:05:52.057379] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:59.640 are Threshold: 0% 00:14:59.640 Life Percentage Used: 0% 00:14:59.640 Data Units Read: 0 00:14:59.640 Data Units Written: 0 00:14:59.640 Host Read Commands: 0 00:14:59.640 Host Write Commands: 0 00:14:59.640 Controller Busy Time: 0 minutes 00:14:59.640 Power Cycles: 0 00:14:59.640 Power On Hours: 0 hours 00:14:59.640 Unsafe Shutdowns: 0 00:14:59.640 Unrecoverable Media Errors: 0 00:14:59.640 Lifetime Error Log Entries: 0 00:14:59.640 Warning Temperature Time: 0 minutes 00:14:59.640 Critical Temperature Time: 0 minutes 00:14:59.640 00:14:59.640 Number of Queues 00:14:59.640 ================ 00:14:59.640 Number of I/O Submission Queues: 127 00:14:59.640 Number of I/O Completion Queues: 127 00:14:59.640 00:14:59.640 Active Namespaces 00:14:59.640 ================= 00:14:59.640 Namespace ID:1 00:14:59.640 Error Recovery Timeout: Unlimited 00:14:59.640 Command Set Identifier: NVM (00h) 00:14:59.640 Deallocate: 
Supported 00:14:59.640 Deallocated/Unwritten Error: Not Supported 00:14:59.640 Deallocated Read Value: Unknown 00:14:59.640 Deallocate in Write Zeroes: Not Supported 00:14:59.640 Deallocated Guard Field: 0xFFFF 00:14:59.640 Flush: Supported 00:14:59.640 Reservation: Supported 00:14:59.640 Namespace Sharing Capabilities: Multiple Controllers 00:14:59.640 Size (in LBAs): 131072 (0GiB) 00:14:59.640 Capacity (in LBAs): 131072 (0GiB) 00:14:59.640 Utilization (in LBAs): 131072 (0GiB) 00:14:59.640 NGUID: 14A993D5519D4C9ABD18300B4185DE49 00:14:59.640 UUID: 14a993d5-519d-4c9a-bd18-300b4185de49 00:14:59.640 Thin Provisioning: Not Supported 00:14:59.640 Per-NS Atomic Units: Yes 00:14:59.640 Atomic Boundary Size (Normal): 0 00:14:59.640 Atomic Boundary Size (PFail): 0 00:14:59.640 Atomic Boundary Offset: 0 00:14:59.640 Maximum Single Source Range Length: 65535 00:14:59.640 Maximum Copy Length: 65535 00:14:59.640 Maximum Source Range Count: 1 00:14:59.640 NGUID/EUI64 Never Reused: No 00:14:59.640 Namespace Write Protected: No 00:14:59.640 Number of LBA Formats: 1 00:14:59.640 Current LBA Format: LBA Format #00 00:14:59.640 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:59.640 00:14:59.640 19:05:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:59.898 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.898 [2024-07-25 19:05:52.287905] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:05.162 Initializing NVMe Controllers 00:15:05.162 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:05.162 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:05.162 
Initialization complete. Launching workers. 00:15:05.162 ======================================================== 00:15:05.162 Latency(us) 00:15:05.162 Device Information : IOPS MiB/s Average min max 00:15:05.162 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33627.39 131.36 3805.99 1187.90 9623.94 00:15:05.162 ======================================================== 00:15:05.162 Total : 33627.39 131.36 3805.99 1187.90 9623.94 00:15:05.162 00:15:05.162 [2024-07-25 19:05:57.388482] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:05.162 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:05.162 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.420 [2024-07-25 19:05:57.641192] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:10.689 Initializing NVMe Controllers 00:15:10.689 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:10.689 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:10.689 Initialization complete. Launching workers. 
00:15:10.689 ======================================================== 00:15:10.689 Latency(us) 00:15:10.689 Device Information : IOPS MiB/s Average min max 00:15:10.689 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31374.40 122.56 4080.95 1215.13 9701.76 00:15:10.689 ======================================================== 00:15:10.689 Total : 31374.40 122.56 4080.95 1215.13 9701.76 00:15:10.689 00:15:10.689 [2024-07-25 19:06:02.662414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:10.689 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:10.689 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.689 [2024-07-25 19:06:02.890544] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:15.952 [2024-07-25 19:06:08.039257] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:15.952 Initializing NVMe Controllers 00:15:15.952 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:15.952 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:15.952 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:15.952 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:15.952 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:15.952 Initialization complete. Launching workers. 
00:15:15.952 Starting thread on core 2 00:15:15.952 Starting thread on core 3 00:15:15.952 Starting thread on core 1 00:15:15.952 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:15.952 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.952 [2024-07-25 19:06:08.370594] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.237 [2024-07-25 19:06:11.430363] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.237 Initializing NVMe Controllers 00:15:19.237 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.237 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.237 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:19.237 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:19.237 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:19.237 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:19.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:19.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:19.237 Initialization complete. Launching workers. 
00:15:19.237 Starting thread on core 1 with urgent priority queue 00:15:19.237 Starting thread on core 2 with urgent priority queue 00:15:19.237 Starting thread on core 3 with urgent priority queue 00:15:19.237 Starting thread on core 0 with urgent priority queue 00:15:19.237 SPDK bdev Controller (SPDK2 ) core 0: 5376.00 IO/s 18.60 secs/100000 ios 00:15:19.237 SPDK bdev Controller (SPDK2 ) core 1: 5070.33 IO/s 19.72 secs/100000 ios 00:15:19.237 SPDK bdev Controller (SPDK2 ) core 2: 6003.67 IO/s 16.66 secs/100000 ios 00:15:19.237 SPDK bdev Controller (SPDK2 ) core 3: 5946.00 IO/s 16.82 secs/100000 ios 00:15:19.237 ======================================================== 00:15:19.237 00:15:19.237 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:19.237 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.494 [2024-07-25 19:06:11.747606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.494 Initializing NVMe Controllers 00:15:19.494 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.494 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.494 Namespace ID: 1 size: 0GB 00:15:19.494 Initialization complete. 00:15:19.494 INFO: using host memory buffer for IO 00:15:19.494 Hello world! 
00:15:19.494 [2024-07-25 19:06:11.759683] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.494 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:19.494 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.751 [2024-07-25 19:06:12.071146] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.122 Initializing NVMe Controllers 00:15:21.122 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.122 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.122 Initialization complete. Launching workers. 00:15:21.122 submit (in ns) avg, min, max = 7891.2, 3554.4, 4003707.8 00:15:21.122 complete (in ns) avg, min, max = 24123.8, 2060.0, 4997582.2 00:15:21.122 00:15:21.122 Submit histogram 00:15:21.122 ================ 00:15:21.122 Range in us Cumulative Count 00:15:21.122 3.532 - 3.556: 0.0076% ( 1) 00:15:21.122 3.556 - 3.579: 0.0379% ( 4) 00:15:21.122 3.579 - 3.603: 1.8589% ( 240) 00:15:21.122 3.603 - 3.627: 7.2838% ( 715) 00:15:21.122 3.627 - 3.650: 18.5812% ( 1489) 00:15:21.122 3.650 - 3.674: 28.8998% ( 1360) 00:15:21.122 3.674 - 3.698: 37.6100% ( 1148) 00:15:21.122 3.698 - 3.721: 45.1593% ( 995) 00:15:21.122 3.721 - 3.745: 52.0182% ( 904) 00:15:21.122 3.745 - 3.769: 58.2018% ( 815) 00:15:21.122 3.769 - 3.793: 63.0501% ( 639) 00:15:21.122 3.793 - 3.816: 66.7602% ( 489) 00:15:21.122 3.816 - 3.840: 69.6889% ( 386) 00:15:21.122 3.840 - 3.864: 73.0653% ( 445) 00:15:21.122 3.864 - 3.887: 77.2003% ( 545) 00:15:21.122 3.887 - 3.911: 81.2595% ( 535) 00:15:21.122 3.911 - 3.935: 84.3930% ( 413) 00:15:21.122 3.935 - 3.959: 86.6464% ( 297) 00:15:21.122 3.959 - 3.982: 88.5964% ( 257) 00:15:21.122 3.982 - 
4.006: 90.4704% ( 247) 00:15:21.122 4.006 - 4.030: 91.8968% ( 188) 00:15:21.122 4.030 - 4.053: 93.1715% ( 168) 00:15:21.122 4.053 - 4.077: 94.0971% ( 122) 00:15:21.122 4.077 - 4.101: 94.8558% ( 100) 00:15:21.122 4.101 - 4.124: 95.5311% ( 89) 00:15:21.122 4.124 - 4.148: 96.0015% ( 62) 00:15:21.122 4.148 - 4.172: 96.3126% ( 41) 00:15:21.122 4.172 - 4.196: 96.5402% ( 30) 00:15:21.122 4.196 - 4.219: 96.8209% ( 37) 00:15:21.122 4.219 - 4.243: 96.9196% ( 13) 00:15:21.122 4.243 - 4.267: 97.0258% ( 14) 00:15:21.122 4.267 - 4.290: 97.1472% ( 16) 00:15:21.122 4.290 - 4.314: 97.2382% ( 12) 00:15:21.122 4.314 - 4.338: 97.3293% ( 12) 00:15:21.122 4.338 - 4.361: 97.4052% ( 10) 00:15:21.122 4.361 - 4.385: 97.4659% ( 8) 00:15:21.122 4.385 - 4.409: 97.5038% ( 5) 00:15:21.122 4.433 - 4.456: 97.5266% ( 3) 00:15:21.122 4.456 - 4.480: 97.5417% ( 2) 00:15:21.122 4.480 - 4.504: 97.5569% ( 2) 00:15:21.122 4.504 - 4.527: 97.5645% ( 1) 00:15:21.122 4.527 - 4.551: 97.5721% ( 1) 00:15:21.122 4.551 - 4.575: 97.6024% ( 4) 00:15:21.122 4.575 - 4.599: 97.6100% ( 1) 00:15:21.122 4.599 - 4.622: 97.6176% ( 1) 00:15:21.122 4.670 - 4.693: 97.6252% ( 1) 00:15:21.122 4.717 - 4.741: 97.6328% ( 1) 00:15:21.122 4.741 - 4.764: 97.6404% ( 1) 00:15:21.122 4.764 - 4.788: 97.6480% ( 1) 00:15:21.122 4.788 - 4.812: 97.6555% ( 1) 00:15:21.122 4.812 - 4.836: 97.6707% ( 2) 00:15:21.122 4.836 - 4.859: 97.6859% ( 2) 00:15:21.122 4.859 - 4.883: 97.7238% ( 5) 00:15:21.122 4.883 - 4.907: 97.7769% ( 7) 00:15:21.122 4.907 - 4.930: 97.9059% ( 17) 00:15:21.122 4.930 - 4.954: 97.9287% ( 3) 00:15:21.122 4.954 - 4.978: 98.0273% ( 13) 00:15:21.122 4.978 - 5.001: 98.0577% ( 4) 00:15:21.122 5.001 - 5.025: 98.1108% ( 7) 00:15:21.122 5.025 - 5.049: 98.1791% ( 9) 00:15:21.122 5.049 - 5.073: 98.2322% ( 7) 00:15:21.122 5.073 - 5.096: 98.2473% ( 2) 00:15:21.122 5.096 - 5.120: 98.2701% ( 3) 00:15:21.122 5.120 - 5.144: 98.2777% ( 1) 00:15:21.122 5.144 - 5.167: 98.2929% ( 2) 00:15:21.122 5.167 - 5.191: 98.3232% ( 4) 00:15:21.122 5.191 - 
5.215: 98.3991% ( 10) 00:15:21.122 5.215 - 5.239: 98.4143% ( 2) 00:15:21.122 5.262 - 5.286: 98.4294% ( 2) 00:15:21.122 5.286 - 5.310: 98.4446% ( 2) 00:15:21.122 5.310 - 5.333: 98.4598% ( 2) 00:15:21.122 5.357 - 5.381: 98.4674% ( 1) 00:15:21.122 5.523 - 5.547: 98.4750% ( 1) 00:15:21.122 5.570 - 5.594: 98.4825% ( 1) 00:15:21.122 5.618 - 5.641: 98.4901% ( 1) 00:15:21.122 5.760 - 5.784: 98.4977% ( 1) 00:15:21.122 5.997 - 6.021: 98.5053% ( 1) 00:15:21.122 6.163 - 6.210: 98.5129% ( 1) 00:15:21.122 6.827 - 6.874: 98.5281% ( 2) 00:15:21.122 6.874 - 6.921: 98.5357% ( 1) 00:15:21.122 6.969 - 7.016: 98.5432% ( 1) 00:15:21.122 7.016 - 7.064: 98.5508% ( 1) 00:15:21.122 7.064 - 7.111: 98.5660% ( 2) 00:15:21.122 7.111 - 7.159: 98.5812% ( 2) 00:15:21.122 7.159 - 7.206: 98.5888% ( 1) 00:15:21.122 7.253 - 7.301: 98.5964% ( 1) 00:15:21.122 7.443 - 7.490: 98.6039% ( 1) 00:15:21.122 7.633 - 7.680: 98.6191% ( 2) 00:15:21.122 7.680 - 7.727: 98.6267% ( 1) 00:15:21.122 7.775 - 7.822: 98.6419% ( 2) 00:15:21.122 7.822 - 7.870: 98.6571% ( 2) 00:15:21.122 7.870 - 7.917: 98.6646% ( 1) 00:15:21.122 8.012 - 8.059: 98.6722% ( 1) 00:15:21.122 8.059 - 8.107: 98.6798% ( 1) 00:15:21.122 8.107 - 8.154: 98.6950% ( 2) 00:15:21.122 8.249 - 8.296: 98.7026% ( 1) 00:15:21.122 8.296 - 8.344: 98.7102% ( 1) 00:15:21.122 8.391 - 8.439: 98.7178% ( 1) 00:15:21.122 8.439 - 8.486: 98.7329% ( 2) 00:15:21.122 8.533 - 8.581: 98.7481% ( 2) 00:15:21.122 8.628 - 8.676: 98.7557% ( 1) 00:15:21.122 8.676 - 8.723: 98.7633% ( 1) 00:15:21.122 8.770 - 8.818: 98.7709% ( 1) 00:15:21.122 8.818 - 8.865: 98.7860% ( 2) 00:15:21.122 8.865 - 8.913: 98.7936% ( 1) 00:15:21.122 8.960 - 9.007: 98.8012% ( 1) 00:15:21.122 9.007 - 9.055: 98.8088% ( 1) 00:15:21.122 9.055 - 9.102: 98.8164% ( 1) 00:15:21.122 9.908 - 9.956: 98.8316% ( 2) 00:15:21.122 10.145 - 10.193: 98.8392% ( 1) 00:15:21.122 10.382 - 10.430: 98.8467% ( 1) 00:15:21.122 10.430 - 10.477: 98.8543% ( 1) 00:15:21.122 10.524 - 10.572: 98.8619% ( 1) 00:15:21.122 10.619 - 10.667: 
98.8695% ( 1) 00:15:21.122 10.951 - 10.999: 98.8771% ( 1) 00:15:21.122 11.615 - 11.662: 98.8847% ( 1) 00:15:21.122 11.757 - 11.804: 98.8923% ( 1) 00:15:21.122 11.899 - 11.947: 98.8998% ( 1) 00:15:21.122 12.136 - 12.231: 98.9226% ( 3) 00:15:21.122 12.705 - 12.800: 98.9302% ( 1) 00:15:21.122 14.033 - 14.127: 98.9378% ( 1) 00:15:21.122 14.317 - 14.412: 98.9454% ( 1) 00:15:21.122 14.791 - 14.886: 98.9530% ( 1) 00:15:21.122 15.360 - 15.455: 98.9605% ( 1) 00:15:21.122 17.067 - 17.161: 98.9681% ( 1) 00:15:21.122 17.256 - 17.351: 98.9757% ( 1) 00:15:21.122 17.351 - 17.446: 98.9985% ( 3) 00:15:21.122 17.446 - 17.541: 99.0212% ( 3) 00:15:21.122 17.541 - 17.636: 99.0364% ( 2) 00:15:21.122 17.636 - 17.730: 99.0971% ( 8) 00:15:21.122 17.730 - 17.825: 99.1654% ( 9) 00:15:21.122 17.825 - 17.920: 99.2261% ( 8) 00:15:21.122 17.920 - 18.015: 99.3475% ( 16) 00:15:21.122 18.015 - 18.110: 99.3854% ( 5) 00:15:21.122 18.110 - 18.204: 99.3930% ( 1) 00:15:21.122 18.204 - 18.299: 99.4613% ( 9) 00:15:21.122 18.299 - 18.394: 99.5220% ( 8) 00:15:21.122 18.394 - 18.489: 99.5827% ( 8) 00:15:21.122 18.489 - 18.584: 99.6586% ( 10) 00:15:21.122 18.584 - 18.679: 99.7117% ( 7) 00:15:21.122 18.679 - 18.773: 99.7420% ( 4) 00:15:21.122 18.773 - 18.868: 99.7800% ( 5) 00:15:21.122 18.868 - 18.963: 99.8027% ( 3) 00:15:21.123 18.963 - 19.058: 99.8331% ( 4) 00:15:21.123 19.342 - 19.437: 99.8634% ( 4) 00:15:21.123 19.816 - 19.911: 99.8710% ( 1) 00:15:21.123 19.911 - 20.006: 99.8786% ( 1) 00:15:21.123 28.255 - 28.444: 99.8862% ( 1) 00:15:21.123 29.203 - 29.393: 99.8938% ( 1) 00:15:21.123 29.772 - 29.961: 99.9014% ( 1) 00:15:21.123 3980.705 - 4004.978: 100.0000% ( 13) 00:15:21.123 00:15:21.123 Complete histogram 00:15:21.123 ================== 00:15:21.123 Range in us Cumulative Count 00:15:21.123 2.050 - 2.062: 0.0228% ( 3) 00:15:21.123 2.062 - 2.074: 9.2489% ( 1216) 00:15:21.123 2.074 - 2.086: 36.8437% ( 3637) 00:15:21.123 2.086 - 2.098: 39.6282% ( 367) 00:15:21.123 2.098 - 2.110: 50.1138% ( 1382) 
00:15:21.123 2.110 - 2.121: 59.6737% ( 1260) 00:15:21.123 2.121 - 2.133: 61.4112% ( 229) 00:15:21.123 2.133 - 2.145: 69.2564% ( 1034) 00:15:21.123 2.145 - 2.157: 76.5781% ( 965) 00:15:21.123 2.157 - 2.169: 77.8680% ( 170) 00:15:21.123 2.169 - 2.181: 83.5205% ( 745) 00:15:21.123 2.181 - 2.193: 87.4279% ( 515) 00:15:21.123 2.193 - 2.204: 88.2853% ( 113) 00:15:21.123 2.204 - 2.216: 89.6131% ( 175) 00:15:21.123 2.216 - 2.228: 91.9879% ( 313) 00:15:21.123 2.228 - 2.240: 93.3384% ( 178) 00:15:21.123 2.240 - 2.252: 94.4082% ( 141) 00:15:21.123 2.252 - 2.264: 95.0303% ( 82) 00:15:21.123 2.264 - 2.276: 95.1745% ( 19) 00:15:21.123 2.276 - 2.287: 95.3566% ( 24) 00:15:21.123 2.287 - 2.299: 95.7056% ( 46) 00:15:21.123 2.299 - 2.311: 96.0470% ( 45) 00:15:21.123 2.311 - 2.323: 96.1305% ( 11) 00:15:21.123 2.323 - 2.335: 96.1457% ( 2) 00:15:21.123 2.335 - 2.347: 96.3126% ( 22) 00:15:21.123 2.347 - 2.359: 96.5023% ( 25) 00:15:21.123 2.359 - 2.370: 96.7982% ( 39) 00:15:21.123 2.370 - 2.382: 97.1017% ( 40) 00:15:21.123 2.382 - 2.394: 97.3748% ( 36) 00:15:21.123 2.394 - 2.406: 97.6100% ( 31) 00:15:21.123 2.406 - 2.418: 97.7921% ( 24) 00:15:21.123 2.418 - 2.430: 98.0273% ( 31) 00:15:21.123 2.430 - 2.441: 98.2094% ( 24) 00:15:21.123 2.441 - 2.453: 98.3460% ( 18) 00:15:21.123 2.453 - 2.465: 98.3915% ( 6) 00:15:21.123 2.465 - 2.477: 98.4294% ( 5) 00:15:21.123 2.477 - 2.489: 98.4446% ( 2) 00:15:21.123 2.489 - 2.501: 98.4674% ( 3) 00:15:21.123 2.501 - 2.513: 98.4825% ( 2) 00:15:21.123 2.524 - 2.536: 98.4901% ( 1) 00:15:21.123 2.536 - 2.548: 98.4977% ( 1) 00:15:21.123 2.560 - 2.572: 98.5053% ( 1) 00:15:21.123 2.596 - 2.607: 98.5129% ( 1) 00:15:21.123 2.619 - 2.631: 98.5205% ( 1) 00:15:21.123 2.690 - 2.702: 98.5281% ( 1) 00:15:21.123 2.714 - 2.726: 98.5357% ( 1) 00:15:21.123 2.761 - 2.773: 98.5508% ( 2) 00:15:21.123 2.773 - 2.785: 98.5584% ( 1) 00:15:21.123 2.856 - 2.868: 98.5660% ( 1) 00:15:21.123 2.975 - 2.987: 98.5736% ( 1) 00:15:21.123 3.247 - 3.271: 98.5812% ( 1) 00:15:21.123 3.271 - 
3.295: 98.6191% ( 5) 00:15:21.123 3.295 - 3.319: 98.6267% ( 1) 00:15:21.123 3.319 - 3.342: 98.6419% ( 2) 00:15:21.123 3.342 - 3.366: 98.6571% ( 2) 00:15:21.123 3.366 - 3.390: 98.6722% ( 2) 00:15:21.123 3.390 - 3.413: 98.6874% ( 2) 00:15:21.123 3.413 - 3.437: 98.7102% ( 3) 00:15:21.123 3.437 - 3.461: 98.7178% ( 1) 00:15:21.123 3.461 - 3.484: 98.7329% ( 2) 00:15:21.123 3.484 - 3.508: 98.7405% ( 1) 00:15:21.123 3.508 - 3.532: 98.7557% ( 2) 00:15:21.123 3.603 - 3.627: 98.7709% ( 2) 00:15:21.123 3.627 - 3.650: 98.7785% ( 1) 00:15:21.123 3.650 - 3.674: 98.7860% ( 1) 00:15:21.123 3.674 - 3.698: 98.7936% ( 1) 00:15:21.123 3.698 - 3.721: 98.8088% ( 2) 00:15:21.123 3.864 - 3.887: 98.8164% ( 1) 00:15:21.123 3.982 - 4.006: 98.8240% ( 1) 00:15:21.123 4.030 - 4.053: 98.8316% ( 1) 00:15:21.123 4.053 - 4.077: 98.8392% ( 1) 00:15:21.123 5.286 - 5.310: 98.8467% ( 1) 00:15:21.123 5.499 - 5.523: 98.8543% ( 1) 00:15:21.123 5.641 - 5.665: 98.8619% ( 1) 00:15:21.123 5.665 - 5.689: 98.8771% ( 2) 00:15:21.123 5.736 - 5.760: 98.8847% ( 1) 00:15:21.123 5.760 - 5.784: 98.8923% ( 1) 00:15:21.123 5.831 - 5.855: 98.9074% ( 2) 00:15:21.123 6.044 - 6.068: 98.9150% ( 1) 00:15:21.123 6.210 - 6.258: 98.9226% ( 1) 00:15:21.123 6.542 - 6.590: 98.9302% ( 1) 00:15:21.123 6.779 - 6.827: 98.9454% ( 2) 00:15:21.123 6.874 - 6.921: 98.9530% ( 1) 00:15:21.123 7.159 - 7.206: 98.9605% ( 1) 00:15:21.123 7.727 - 7.775: 98.9681% ( 1) 00:15:21.123 8.012 - 8.059: 98.9757% ( 1) 00:15:21.123 10.287 - 10.335: 98.9833% ( 1) 00:15:21.123 15.644 - 15.739: 98.9909% ( 1) 00:15:21.123 15.739 - 15.834: 98.9985% ( 1) 00:15:21.123 15.834 - 15.929: 99.0288% ( 4) 00:15:21.123 15.929 - 16.024: 99.0668% ( 5) 00:15:21.123 16.024 - 16.119: 99.0895% ( 3) 00:15:21.123 16.119 - 16.213: 99.0971% ( 1) 00:15:21.123 16.213 - 16.308: 99.1123% ( 2) 00:15:21.123 16.308 - 16.403: 99.1654% ( 7) 00:15:21.123 16.403 - 16.498: 99.2109% ( 6) 00:15:21.123 16.498 - 16.593: 99.2185% ( 1) 00:15:21.123 16.593 - 16.687: 99.2489% ( 4) 00:15:21.123 16.687 - 
16.782: 99.3096% ( 8) 00:15:21.123 16.782 - 16.877: 99.3399% ( 4) 00:15:21.123 16.877 - 16.972: 9[2024-07-25 19:06:13.164789] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.123 9.3551% ( 2) 00:15:21.123 17.067 - 17.161: 99.3627% ( 1) 00:15:21.123 17.161 - 17.256: 99.3854% ( 3) 00:15:21.123 17.256 - 17.351: 99.4006% ( 2) 00:15:21.123 17.351 - 17.446: 99.4082% ( 1) 00:15:21.123 17.446 - 17.541: 99.4158% ( 1) 00:15:21.123 17.636 - 17.730: 99.4234% ( 1) 00:15:21.123 17.920 - 18.015: 99.4310% ( 1) 00:15:21.123 18.204 - 18.299: 99.4385% ( 1) 00:15:21.123 18.394 - 18.489: 99.4461% ( 1) 00:15:21.123 21.049 - 21.144: 99.4537% ( 1) 00:15:21.123 3009.801 - 3021.938: 99.4613% ( 1) 00:15:21.123 3980.705 - 4004.978: 99.9469% ( 64) 00:15:21.123 4004.978 - 4029.250: 99.9848% ( 5) 00:15:21.123 4975.881 - 5000.154: 100.0000% ( 2) 00:15:21.123 00:15:21.123 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:21.123 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:21.123 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:21.123 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:21.123 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.123 [ 00:15:21.123 { 00:15:21.123 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:21.123 "subtype": "Discovery", 00:15:21.123 "listen_addresses": [], 00:15:21.123 "allow_any_host": true, 00:15:21.123 "hosts": [] 00:15:21.123 }, 00:15:21.123 { 00:15:21.123 "nqn": "nqn.2019-07.io.spdk:cnode1", 
00:15:21.123 "subtype": "NVMe", 00:15:21.123 "listen_addresses": [ 00:15:21.123 { 00:15:21.123 "trtype": "VFIOUSER", 00:15:21.123 "adrfam": "IPv4", 00:15:21.123 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:21.123 "trsvcid": "0" 00:15:21.123 } 00:15:21.123 ], 00:15:21.123 "allow_any_host": true, 00:15:21.123 "hosts": [], 00:15:21.123 "serial_number": "SPDK1", 00:15:21.123 "model_number": "SPDK bdev Controller", 00:15:21.123 "max_namespaces": 32, 00:15:21.123 "min_cntlid": 1, 00:15:21.123 "max_cntlid": 65519, 00:15:21.123 "namespaces": [ 00:15:21.123 { 00:15:21.123 "nsid": 1, 00:15:21.123 "bdev_name": "Malloc1", 00:15:21.123 "name": "Malloc1", 00:15:21.123 "nguid": "AE6DA195A80B43CF99D62A6CC18A94B8", 00:15:21.123 "uuid": "ae6da195-a80b-43cf-99d6-2a6cc18a94b8" 00:15:21.123 }, 00:15:21.123 { 00:15:21.123 "nsid": 2, 00:15:21.123 "bdev_name": "Malloc3", 00:15:21.123 "name": "Malloc3", 00:15:21.123 "nguid": "6EFBE223924D462991D2ACFD7561C55C", 00:15:21.123 "uuid": "6efbe223-924d-4629-91d2-acfd7561c55c" 00:15:21.123 } 00:15:21.123 ] 00:15:21.123 }, 00:15:21.123 { 00:15:21.123 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:21.123 "subtype": "NVMe", 00:15:21.124 "listen_addresses": [ 00:15:21.124 { 00:15:21.124 "trtype": "VFIOUSER", 00:15:21.124 "adrfam": "IPv4", 00:15:21.124 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:21.124 "trsvcid": "0" 00:15:21.124 } 00:15:21.124 ], 00:15:21.124 "allow_any_host": true, 00:15:21.124 "hosts": [], 00:15:21.124 "serial_number": "SPDK2", 00:15:21.124 "model_number": "SPDK bdev Controller", 00:15:21.124 "max_namespaces": 32, 00:15:21.124 "min_cntlid": 1, 00:15:21.124 "max_cntlid": 65519, 00:15:21.124 "namespaces": [ 00:15:21.124 { 00:15:21.124 "nsid": 1, 00:15:21.124 "bdev_name": "Malloc2", 00:15:21.124 "name": "Malloc2", 00:15:21.124 "nguid": "14A993D5519D4C9ABD18300B4185DE49", 00:15:21.124 "uuid": "14a993d5-519d-4c9a-bd18-300b4185de49" 00:15:21.124 } 00:15:21.124 ] 00:15:21.124 } 00:15:21.124 ] 00:15:21.124 19:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:21.124 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=886698 00:15:21.124 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:21.124 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:21.124 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:21.124 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:21.124 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:21.124 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:21.124 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:21.124 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:21.124 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.381 [2024-07-25 19:06:13.678594] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.381 Malloc4 00:15:21.381 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:21.638 [2024-07-25 19:06:14.026182] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.638 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.638 Asynchronous Event Request test 00:15:21.638 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.638 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.638 Registering asynchronous event callbacks... 00:15:21.638 Starting namespace attribute notice tests for all controllers... 00:15:21.638 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:21.638 aer_cb - Changed Namespace 00:15:21.638 Cleaning up... 
00:15:21.895 [ 00:15:21.895 { 00:15:21.895 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:21.895 "subtype": "Discovery", 00:15:21.895 "listen_addresses": [], 00:15:21.895 "allow_any_host": true, 00:15:21.895 "hosts": [] 00:15:21.895 }, 00:15:21.895 { 00:15:21.895 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:21.895 "subtype": "NVMe", 00:15:21.895 "listen_addresses": [ 00:15:21.895 { 00:15:21.895 "trtype": "VFIOUSER", 00:15:21.895 "adrfam": "IPv4", 00:15:21.895 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:21.895 "trsvcid": "0" 00:15:21.895 } 00:15:21.895 ], 00:15:21.895 "allow_any_host": true, 00:15:21.895 "hosts": [], 00:15:21.895 "serial_number": "SPDK1", 00:15:21.895 "model_number": "SPDK bdev Controller", 00:15:21.895 "max_namespaces": 32, 00:15:21.895 "min_cntlid": 1, 00:15:21.895 "max_cntlid": 65519, 00:15:21.895 "namespaces": [ 00:15:21.895 { 00:15:21.895 "nsid": 1, 00:15:21.895 "bdev_name": "Malloc1", 00:15:21.895 "name": "Malloc1", 00:15:21.895 "nguid": "AE6DA195A80B43CF99D62A6CC18A94B8", 00:15:21.895 "uuid": "ae6da195-a80b-43cf-99d6-2a6cc18a94b8" 00:15:21.895 }, 00:15:21.895 { 00:15:21.895 "nsid": 2, 00:15:21.895 "bdev_name": "Malloc3", 00:15:21.895 "name": "Malloc3", 00:15:21.895 "nguid": "6EFBE223924D462991D2ACFD7561C55C", 00:15:21.895 "uuid": "6efbe223-924d-4629-91d2-acfd7561c55c" 00:15:21.895 } 00:15:21.895 ] 00:15:21.895 }, 00:15:21.895 { 00:15:21.895 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:21.895 "subtype": "NVMe", 00:15:21.895 "listen_addresses": [ 00:15:21.896 { 00:15:21.896 "trtype": "VFIOUSER", 00:15:21.896 "adrfam": "IPv4", 00:15:21.896 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:21.896 "trsvcid": "0" 00:15:21.896 } 00:15:21.896 ], 00:15:21.896 "allow_any_host": true, 00:15:21.896 "hosts": [], 00:15:21.896 "serial_number": "SPDK2", 00:15:21.896 "model_number": "SPDK bdev Controller", 00:15:21.896 "max_namespaces": 32, 00:15:21.896 "min_cntlid": 1, 00:15:21.896 "max_cntlid": 65519, 00:15:21.896 "namespaces": [ 
00:15:21.896 { 00:15:21.896 "nsid": 1, 00:15:21.896 "bdev_name": "Malloc2", 00:15:21.896 "name": "Malloc2", 00:15:21.896 "nguid": "14A993D5519D4C9ABD18300B4185DE49", 00:15:21.896 "uuid": "14a993d5-519d-4c9a-bd18-300b4185de49" 00:15:21.896 }, 00:15:21.896 { 00:15:21.896 "nsid": 2, 00:15:21.896 "bdev_name": "Malloc4", 00:15:21.896 "name": "Malloc4", 00:15:21.896 "nguid": "2E763C3C6F8345CC9B952D63E84354F7", 00:15:21.896 "uuid": "2e763c3c-6f83-45cc-9b95-2d63e84354f7" 00:15:21.896 } 00:15:21.896 ] 00:15:21.896 } 00:15:21.896 ] 00:15:21.896 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 886698 00:15:21.896 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:21.896 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 880474 00:15:21.896 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 880474 ']' 00:15:21.896 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 880474 00:15:21.896 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:21.896 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:21.896 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 880474 00:15:21.896 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:21.896 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:21.896 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 880474' 00:15:21.896 killing process with pid 880474 00:15:21.896 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 880474 00:15:21.896 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 880474 00:15:22.462 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:22.462 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:22.462 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:22.462 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:22.462 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:22.462 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=886842 00:15:22.462 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:22.462 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 886842' 00:15:22.462 Process pid: 886842 00:15:22.462 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:22.462 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 886842 00:15:22.462 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 886842 ']' 00:15:22.462 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.462 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:22.462 19:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.462 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:22.462 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:22.462 [2024-07-25 19:06:14.753035] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:22.462 [2024-07-25 19:06:14.754074] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:22.462 [2024-07-25 19:06:14.754166] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.462 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.462 [2024-07-25 19:06:14.826465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.752 [2024-07-25 19:06:14.945804] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.752 [2024-07-25 19:06:14.945854] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.752 [2024-07-25 19:06:14.945870] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.752 [2024-07-25 19:06:14.945883] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.752 [2024-07-25 19:06:14.945894] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:22.752 [2024-07-25 19:06:14.945960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.752 [2024-07-25 19:06:14.946030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.752 [2024-07-25 19:06:14.946074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.752 [2024-07-25 19:06:14.946077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.752 [2024-07-25 19:06:15.056642] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:22.753 [2024-07-25 19:06:15.056852] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:22.753 [2024-07-25 19:06:15.057168] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:22.753 [2024-07-25 19:06:15.057834] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:22.753 [2024-07-25 19:06:15.058071] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:15:23.316 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:23.316 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:23.316 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:24.246 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:24.811 19:06:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:24.811 19:06:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:24.811 19:06:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:24.811 19:06:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:24.811 19:06:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:24.811 Malloc1 00:15:24.811 19:06:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:25.069 19:06:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:25.326 19:06:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:25.583 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:25.583 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:25.583 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:25.841 Malloc2 00:15:25.841 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:26.099 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:26.356 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:26.614 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:26.614 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 886842 00:15:26.614 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 886842 ']' 00:15:26.614 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 886842 00:15:26.614 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:26.614 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:26.614 19:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 886842 00:15:26.614 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:26.614 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:26.614 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 886842' 00:15:26.614 killing process with pid 886842 00:15:26.614 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 886842 00:15:26.614 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 886842 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:27.181 00:15:27.181 real 0m54.124s 00:15:27.181 user 3m33.637s 00:15:27.181 sys 0m4.681s 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:27.181 ************************************ 00:15:27.181 END TEST nvmf_vfio_user 00:15:27.181 ************************************ 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.181 ************************************ 00:15:27.181 START TEST nvmf_vfio_user_nvme_compliance 00:15:27.181 ************************************ 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:27.181 * Looking for test storage... 00:15:27.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.181 19:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.181 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.182 19:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=887564 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 887564' 00:15:27.182 Process pid: 887564 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 887564 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 887564 ']' 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:27.182 19:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:27.182 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:27.182 [2024-07-25 19:06:19.554978] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:27.182 [2024-07-25 19:06:19.555073] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.182 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.182 [2024-07-25 19:06:19.620851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:27.440 [2024-07-25 19:06:19.728302] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.440 [2024-07-25 19:06:19.728360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.440 [2024-07-25 19:06:19.728376] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.440 [2024-07-25 19:06:19.728390] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.440 [2024-07-25 19:06:19.728401] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:27.440 [2024-07-25 19:06:19.728485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.440 [2024-07-25 19:06:19.728550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.440 [2024-07-25 19:06:19.728554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.440 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:27.440 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:27.440 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.811 19:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:28.811 malloc0 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:28.811 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:28.811 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.811 00:15:28.811 00:15:28.811 CUnit - A unit testing framework for C - Version 2.1-3 00:15:28.811 http://cunit.sourceforge.net/ 00:15:28.811 00:15:28.811 00:15:28.811 Suite: nvme_compliance 00:15:28.811 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 19:06:21.093619] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.811 [2024-07-25 19:06:21.095030] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:28.811 [2024-07-25 19:06:21.095053] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:28.811 [2024-07-25 19:06:21.095079] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:28.811 [2024-07-25 19:06:21.096636] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.811 passed 00:15:28.811 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 19:06:21.184261] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.811 [2024-07-25 19:06:21.187282] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.811 passed 00:15:28.811 Test: admin_identify_ns ...[2024-07-25 19:06:21.272666] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.099 [2024-07-25 19:06:21.332135] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:29.099 [2024-07-25 19:06:21.340134] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:29.099 [2024-07-25 
19:06:21.361270] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.099 passed 00:15:29.099 Test: admin_get_features_mandatory_features ...[2024-07-25 19:06:21.443323] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.099 [2024-07-25 19:06:21.447347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.099 passed 00:15:29.099 Test: admin_get_features_optional_features ...[2024-07-25 19:06:21.531887] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.099 [2024-07-25 19:06:21.534907] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.099 passed 00:15:29.357 Test: admin_set_features_number_of_queues ...[2024-07-25 19:06:21.619100] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.357 [2024-07-25 19:06:21.722225] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.357 passed 00:15:29.357 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 19:06:21.808963] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.357 [2024-07-25 19:06:21.811982] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.614 passed 00:15:29.614 Test: admin_get_log_page_with_lpo ...[2024-07-25 19:06:21.893210] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.614 [2024-07-25 19:06:21.961116] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:29.614 [2024-07-25 19:06:21.974203] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.614 passed 00:15:29.614 Test: fabric_property_get ...[2024-07-25 19:06:22.057376] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.614 [2024-07-25 19:06:22.058680] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:29.614 [2024-07-25 19:06:22.060399] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.872 passed 00:15:29.872 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 19:06:22.146957] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.872 [2024-07-25 19:06:22.148278] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:29.872 [2024-07-25 19:06:22.149978] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.872 passed 00:15:29.872 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 19:06:22.234348] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.872 [2024-07-25 19:06:22.318118] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:29.872 [2024-07-25 19:06:22.334124] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:29.872 [2024-07-25 19:06:22.339258] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.129 passed 00:15:30.129 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 19:06:22.423070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.129 [2024-07-25 19:06:22.424382] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:30.129 [2024-07-25 19:06:22.426114] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.129 passed 00:15:30.129 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 19:06:22.507292] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.129 [2024-07-25 19:06:22.587127] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:15:30.387 [2024-07-25 19:06:22.611115] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:30.387 [2024-07-25 19:06:22.616224] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.387 passed 00:15:30.387 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 19:06:22.696898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.387 [2024-07-25 19:06:22.698224] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:30.387 [2024-07-25 19:06:22.698279] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:30.387 [2024-07-25 19:06:22.701929] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.387 passed 00:15:30.387 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 19:06:22.784697] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.645 [2024-07-25 19:06:22.877116] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:30.645 [2024-07-25 19:06:22.885114] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:30.645 [2024-07-25 19:06:22.893116] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:30.645 [2024-07-25 19:06:22.901116] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:30.645 [2024-07-25 19:06:22.930239] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.645 passed 00:15:30.645 Test: admin_create_io_sq_verify_pc ...[2024-07-25 19:06:23.013733] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.645 [2024-07-25 19:06:23.030124] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:30.645 
[2024-07-25 19:06:23.048018] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.645 passed 00:15:30.903 Test: admin_create_io_qp_max_qps ...[2024-07-25 19:06:23.132628] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.835 [2024-07-25 19:06:24.230117] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:32.400 [2024-07-25 19:06:24.608668] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.400 passed 00:15:32.400 Test: admin_create_io_sq_shared_cq ...[2024-07-25 19:06:24.693626] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.400 [2024-07-25 19:06:24.825115] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:32.400 [2024-07-25 19:06:24.862209] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.658 passed 00:15:32.658 00:15:32.658 Run Summary: Type Total Ran Passed Failed Inactive 00:15:32.658 suites 1 1 n/a 0 0 00:15:32.658 tests 18 18 18 0 0 00:15:32.658 asserts 360 360 360 0 n/a 00:15:32.658 00:15:32.658 Elapsed time = 1.562 seconds 00:15:32.658 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 887564 00:15:32.658 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 887564 ']' 00:15:32.658 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 887564 00:15:32.658 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:32.658 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.658 19:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 887564 00:15:32.658 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:32.658 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:32.658 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 887564' 00:15:32.658 killing process with pid 887564 00:15:32.658 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 887564 00:15:32.658 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 887564 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:32.917 00:15:32.917 real 0m5.807s 00:15:32.917 user 0m16.212s 00:15:32.917 sys 0m0.539s 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:32.917 ************************************ 00:15:32.917 END TEST nvmf_vfio_user_nvme_compliance 00:15:32.917 ************************************ 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:32.917 
19:06:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:32.917 ************************************ 00:15:32.917 START TEST nvmf_vfio_user_fuzz 00:15:32.917 ************************************ 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:32.917 * Looking for test storage... 00:15:32.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- 
# NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.917 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.918 19:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:32.918 19:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=888302 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 888302' 00:15:32.918 Process pid: 888302 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 888302 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 888302 ']' 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.918 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:33.484 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.484 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:33.484 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:34.418 malloc0 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.418 19:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:34.418 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 
'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:06.482 Fuzzing completed. Shutting down the fuzz application 00:16:06.482 00:16:06.482 Dumping successful admin opcodes: 00:16:06.482 8, 9, 10, 24, 00:16:06.482 Dumping successful io opcodes: 00:16:06.482 0, 00:16:06.482 NS: 0x200003a1ef00 I/O qp, Total commands completed: 595082, total successful commands: 2299, random_seed: 2894069952 00:16:06.482 NS: 0x200003a1ef00 admin qp, Total commands completed: 97205, total successful commands: 788, random_seed: 3351019968 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 888302 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 888302 ']' 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 888302 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 888302 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:06.482 19:06:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 888302' 00:16:06.482 killing process with pid 888302 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 888302 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 888302 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:06.482 00:16:06.482 real 0m32.392s 00:16:06.482 user 0m31.569s 00:16:06.482 sys 0m29.020s 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:06.482 ************************************ 00:16:06.482 END TEST nvmf_vfio_user_fuzz 00:16:06.482 ************************************ 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:06.482 
************************************ 00:16:06.482 START TEST nvmf_auth_target 00:16:06.482 ************************************ 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:06.482 * Looking for test storage... 00:16:06.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.482 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.483 19:06:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- 
# subnqn=nqn.2024-03.io.spdk:cnode0 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:06.483 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.858 19:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:07.858 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:07.858 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.858 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:07.859 Found net devices under 0000:09:00.0: cvl_0_0 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:07.859 Found net devices under 0000:09:00.1: cvl_0_1 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
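[editor's note] The discovery loop in the trace matches each PCI function against a table of Intel/Mellanox device IDs and then resolves it to a kernel net device through sysfs. A minimal sketch of that sysfs step, reconstructed from the logged `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` expansion (the cvl_0_* names come from the test rig's interface renaming, not from this glob by itself):

```shell
pci_net_devs() {
    # For each PCI address, print the kernel net devices bound to that
    # function, mirroring the "Found net devices under ..." lines above.
    local pci dev
    for pci in "$@"; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            # Without nullglob an unmatched glob stays literal; -e filters it out.
            [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done
    return 0
}

# Example with the two PCI functions from the log:
pci_net_devs 0000:09:00.0 0000:09:00.1
```

On the logged rig this prints one line per port; on a machine without those devices it prints nothing and still exits 0.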
netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.859 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:08.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:08.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:16:08.117 00:16:08.117 --- 10.0.0.2 ping statistics --- 00:16:08.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.117 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:08.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:08.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:16:08.117 00:16:08.117 --- 10.0.0.1 ping statistics --- 00:16:08.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.117 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
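[editor's note] The namespace plumbing traced above (nvmf/common.sh@244 through @268) can be collapsed into one function. This is a reconstruction from the logged commands, not SPDK's actual nvmf_tcp_init; the interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.x/24 addresses are taken directly from the log. The commands need root, so RUN=echo gives a dry run that only prints them.

```shell
RUN=${RUN:-sudo}   # set RUN=echo for a dry run
nvmf_tcp_netns_init() {
    local tgt_if=$1 ini_if=$2 ns=$3
    # Clear any stale addresses, then move the target port into its own netns.
    $RUN ip -4 addr flush "$tgt_if"
    $RUN ip -4 addr flush "$ini_if"
    $RUN ip netns add "$ns"
    $RUN ip link set "$tgt_if" netns "$ns"
    # Initiator stays in the default namespace; target lives inside the netns.
    $RUN ip addr add 10.0.0.1/24 dev "$ini_if"
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    $RUN ip link set "$ini_if" up
    $RUN ip netns exec "$ns" ip link set "$tgt_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    # Allow NVMe/TCP traffic to the default port, then verify both directions.
    $RUN iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    $RUN ping -c 1 10.0.0.2
    $RUN ip netns exec "$ns" ping -c 1 10.0.0.1
}

# Dry run with the names from the log:
RUN=echo nvmf_tcp_netns_init cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```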
timing_enter start_nvmf_tgt 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=894044 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 894044 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 894044 ']' 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.117 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=894199 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@726 -- # digest=null 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d36f170d318f707dd178896eb54caddc555b87c0efab9117 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OVA 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d36f170d318f707dd178896eb54caddc555b87c0efab9117 0 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d36f170d318f707dd178896eb54caddc555b87c0efab9117 0 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:09.089 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d36f170d318f707dd178896eb54caddc555b87c0efab9117 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OVA 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OVA 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.OVA 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=092a099624d3dee94a0610216da630f57c26b501de4f4a85f074e1e239accf4a 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.m0M 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 092a099624d3dee94a0610216da630f57c26b501de4f4a85f074e1e239accf4a 3 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 092a099624d3dee94a0610216da630f57c26b501de4f4a85f074e1e239accf4a 3 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=092a099624d3dee94a0610216da630f57c26b501de4f4a85f074e1e239accf4a 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.m0M 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.m0M 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.m0M 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4e63620aa9182a93e2c4ee64b4af3fbd 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.yfV 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4e63620aa9182a93e2c4ee64b4af3fbd 1 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
4e63620aa9182a93e2c4ee64b4af3fbd 1 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4e63620aa9182a93e2c4ee64b4af3fbd 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.yfV 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.yfV 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.yfV 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b9f919fe2bb959d40796794fde4eab6e62fe1ef63f243419 00:16:09.090 19:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vY7 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b9f919fe2bb959d40796794fde4eab6e62fe1ef63f243419 2 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b9f919fe2bb959d40796794fde4eab6e62fe1ef63f243419 2 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b9f919fe2bb959d40796794fde4eab6e62fe1ef63f243419 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:09.090 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vY7 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vY7 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.vY7 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dfc7bf51365c337efaa178c920674ce0e8985f0dae2ce0ff 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.oQf 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dfc7bf51365c337efaa178c920674ce0e8985f0dae2ce0ff 2 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dfc7bf51365c337efaa178c920674ce0e8985f0dae2ce0ff 2 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dfc7bf51365c337efaa178c920674ce0e8985f0dae2ce0ff 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.oQf 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.oQf 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.oQf 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4b346326b03c500c7d882b78216a406b 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BGE 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4b346326b03c500c7d882b78216a406b 1 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4b346326b03c500c7d882b78216a406b 1 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4b346326b03c500c7d882b78216a406b 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BGE 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BGE 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.BGE 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e00c86ce9bef7643dc3769160a8634e9f527927d8c38a5fbce53c2c6eb35657d 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.90d 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e00c86ce9bef7643dc3769160a8634e9f527927d8c38a5fbce53c2c6eb35657d 3 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 e00c86ce9bef7643dc3769160a8634e9f527927d8c38a5fbce53c2c6eb35657d 3 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e00c86ce9bef7643dc3769160a8634e9f527927d8c38a5fbce53c2c6eb35657d 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.90d 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.90d 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.90d 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 894044 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 894044 ']' 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
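The `gen_dhchap_key` / `format_key` trace above reads random bytes with `xxd`, wraps them in a `DHHC-1:<digest-id>:…:` secret, and stores the result in a 0600 temp file. A minimal stand-alone sketch of that flow follows; it assumes the NVMe DH-HMAC-CHAP secret layout of `base64(secret ‖ little-endian CRC-32(secret))`, and the variable names are illustrative rather than SPDK's actual `nvmf/common.sh` helpers.

```shell
# Sketch of the key-generation flow traced above (sha384 / 48-hex-char case).
digest=sha384
len=48                                  # hex characters; xxd reads len/2 raw bytes
key=$(xxd -p -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-$digest.XXX)
# DHHC-1:<digest id>:<base64(secret + little-endian CRC-32)>:
# digest id 2 corresponds to sha384, matching the "format_dhchap_key ... 2" call.
python3 - "$key" 2 > "$file" <<'EOF'
import base64, binascii, struct, sys
key, digest = sys.argv[1].encode(), int(sys.argv[2])
blob = base64.b64encode(key + struct.pack("<I", binascii.crc32(key)))
print(f"DHHC-1:{digest:02}:{blob.decode()}:")
EOF
chmod 0600 "$file"
echo "$file"
```

The trailing CRC-32 lets a consumer detect a corrupted or truncated secret before attempting authentication, which is why the `DHHC-1:00:…==:` strings later in this log end with a few bytes beyond the hex payload.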
00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:09.350 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.609 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.609 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:09.609 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 894199 /var/tmp/host.sock 00:16:09.609 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 894199 ']' 00:16:09.609 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:09.609 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:09.609 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:09.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
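The `waitforlisten` calls above block until the target and host processes are serving their UNIX-domain RPC sockets (`/var/tmp/spdk.sock`, `/var/tmp/host.sock`). The real helper in `autotest_common.sh` also verifies the PID is alive and the RPC responds; the function below is a deliberately simplified stand-in that only polls for the socket file, with an illustrative name and retry budget.

```shell
# Simplified analogue of waitforlisten: poll until a UNIX socket appears.
# Real SPDK code additionally checks the process and issues a probe RPC.
waitfor_sock() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -S $sock ]] && return 0    # -S: path exists and is a socket
        sleep 0.1
    done
    return 1                          # timed out; caller decides how to fail
}
```

Polling with a bounded retry count (the log's `max_retries=100`) keeps a hung target from stalling the whole test run indefinitely.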
00:16:09.609 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:09.609 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.867 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.867 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:09.867 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:09.867 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.867 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.867 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.867 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:09.867 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.OVA 00:16:09.867 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.867 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.867 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.867 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.OVA 00:16:09.867 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.OVA 00:16:10.124 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.m0M ]] 00:16:10.124 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.m0M 00:16:10.124 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.124 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.124 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.124 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.m0M 00:16:10.124 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.m0M 00:16:10.381 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:10.381 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.yfV 00:16:10.381 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.381 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.381 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.381 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.yfV 00:16:10.381 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.yfV 00:16:10.638 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.vY7 ]] 00:16:10.639 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vY7 00:16:10.639 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.639 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.639 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.639 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vY7 00:16:10.639 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vY7 00:16:10.896 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:10.896 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.oQf 00:16:10.896 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.896 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.896 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.896 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.oQf 00:16:10.896 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.oQf 00:16:11.156 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.BGE ]] 00:16:11.156 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BGE 00:16:11.156 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.156 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.156 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.156 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BGE 00:16:11.156 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BGE 00:16:11.415 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:11.415 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.90d 00:16:11.415 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.415 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.415 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.415 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.90d 00:16:11.415 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.90d 00:16:11.673 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:16:11.673 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:11.673 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.673 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.673 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:11.673 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:11.931 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:11.931 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.931 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.931 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:11.931 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:11.931 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.931 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.931 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.931 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:11.931 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.931 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.931 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.189 00:16:12.446 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.446 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.446 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.446 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.705 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.705 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.705 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.705 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.705 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
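After each `bdev_nvme_attach_controller`, `auth.sh` fetches the subsystem's qpairs and checks `.auth.digest`, `.auth.dhgroup`, and `.auth.state` with `jq`, as the trace below shows. The same checks can be reproduced offline against the captured JSON; this sketch uses `python3` instead of `jq` to stay dependency-free, and the sample literal is abbreviated from this run's output.

```shell
# Offline re-check of a qpairs record like the one printed below.
# Fields mirror the jq filters auth.sh uses ('.[0].auth.digest' etc.).
qpairs='[{"cntlid": 1, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}}]'
auth_digest=$(python3 -c 'import json,sys; print(json.loads(sys.argv[1])[0]["auth"]["digest"])' "$qpairs")
auth_dhgroup=$(python3 -c 'import json,sys; print(json.loads(sys.argv[1])[0]["auth"]["dhgroup"])' "$qpairs")
auth_state=$(python3 -c 'import json,sys; print(json.loads(sys.argv[1])[0]["auth"]["state"])' "$qpairs")
echo "digest=$auth_digest dhgroup=$auth_dhgroup state=$auth_state"
```

Note that `"dhgroup": "null"` is the literal DH group name (no Diffie-Hellman exchange), not a missing value, which is why the test matches it as the string `null`.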
qpairs='[ 00:16:12.705 { 00:16:12.705 "cntlid": 1, 00:16:12.705 "qid": 0, 00:16:12.705 "state": "enabled", 00:16:12.705 "thread": "nvmf_tgt_poll_group_000", 00:16:12.705 "listen_address": { 00:16:12.705 "trtype": "TCP", 00:16:12.705 "adrfam": "IPv4", 00:16:12.705 "traddr": "10.0.0.2", 00:16:12.705 "trsvcid": "4420" 00:16:12.705 }, 00:16:12.705 "peer_address": { 00:16:12.705 "trtype": "TCP", 00:16:12.705 "adrfam": "IPv4", 00:16:12.705 "traddr": "10.0.0.1", 00:16:12.705 "trsvcid": "55400" 00:16:12.705 }, 00:16:12.705 "auth": { 00:16:12.705 "state": "completed", 00:16:12.705 "digest": "sha256", 00:16:12.705 "dhgroup": "null" 00:16:12.705 } 00:16:12.705 } 00:16:12.705 ]' 00:16:12.705 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.705 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.705 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.705 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:12.705 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.705 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.705 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.705 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.963 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:16:13.897 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.897 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:13.897 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.897 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.897 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.897 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.897 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:13.897 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:14.155 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:14.155 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.155 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:14.155 19:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:14.155 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:14.155 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.155 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.155 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.155 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.155 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.155 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.155 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.413 00:16:14.413 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.413 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.413 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.671 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.671 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.672 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.672 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.672 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.672 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.672 { 00:16:14.672 "cntlid": 3, 00:16:14.672 "qid": 0, 00:16:14.672 "state": "enabled", 00:16:14.672 "thread": "nvmf_tgt_poll_group_000", 00:16:14.672 "listen_address": { 00:16:14.672 "trtype": "TCP", 00:16:14.672 "adrfam": "IPv4", 00:16:14.672 "traddr": "10.0.0.2", 00:16:14.672 "trsvcid": "4420" 00:16:14.672 }, 00:16:14.672 "peer_address": { 00:16:14.672 "trtype": "TCP", 00:16:14.672 "adrfam": "IPv4", 00:16:14.672 "traddr": "10.0.0.1", 00:16:14.672 "trsvcid": "59044" 00:16:14.672 }, 00:16:14.672 "auth": { 00:16:14.672 "state": "completed", 00:16:14.672 "digest": "sha256", 00:16:14.672 "dhgroup": "null" 00:16:14.672 } 00:16:14.672 } 00:16:14.672 ]' 00:16:14.672 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.672 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.672 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.929 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:14.929 19:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.929 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.929 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.929 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.187 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==: 00:16:16.120 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.120 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:16.120 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.120 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.120 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.120 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.120 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:16.120 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:16.378 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:16.378 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.378 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:16.378 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:16.378 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:16.378 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.378 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.378 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.378 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.378 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.378 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.378 
19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.636 00:16:16.636 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.636 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.636 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.894 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.894 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.894 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.894 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.894 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.894 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.894 { 00:16:16.894 "cntlid": 5, 00:16:16.894 "qid": 0, 00:16:16.894 "state": "enabled", 00:16:16.894 "thread": "nvmf_tgt_poll_group_000", 00:16:16.894 "listen_address": { 00:16:16.894 "trtype": "TCP", 00:16:16.894 "adrfam": "IPv4", 00:16:16.894 "traddr": "10.0.0.2", 00:16:16.894 "trsvcid": "4420" 00:16:16.894 }, 00:16:16.894 "peer_address": { 00:16:16.894 "trtype": "TCP", 00:16:16.894 "adrfam": "IPv4", 00:16:16.894 "traddr": 
"10.0.0.1", 00:16:16.894 "trsvcid": "59070" 00:16:16.894 }, 00:16:16.894 "auth": { 00:16:16.894 "state": "completed", 00:16:16.894 "digest": "sha256", 00:16:16.894 "dhgroup": "null" 00:16:16.895 } 00:16:16.895 } 00:16:16.895 ]' 00:16:16.895 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.895 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.895 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.895 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:16.895 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.153 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.153 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.153 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.153 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX: 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.526 19:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.526 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.784 00:16:18.784 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.784 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.784 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.042 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.042 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.042 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.042 19:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.042 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.042 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.042 { 00:16:19.042 "cntlid": 7, 00:16:19.042 "qid": 0, 00:16:19.042 "state": "enabled", 00:16:19.042 "thread": "nvmf_tgt_poll_group_000", 00:16:19.042 "listen_address": { 00:16:19.042 "trtype": "TCP", 00:16:19.042 "adrfam": "IPv4", 00:16:19.042 "traddr": "10.0.0.2", 00:16:19.042 "trsvcid": "4420" 00:16:19.042 }, 00:16:19.042 "peer_address": { 00:16:19.042 "trtype": "TCP", 00:16:19.042 "adrfam": "IPv4", 00:16:19.042 "traddr": "10.0.0.1", 00:16:19.042 "trsvcid": "59088" 00:16:19.042 }, 00:16:19.042 "auth": { 00:16:19.042 "state": "completed", 00:16:19.042 "digest": "sha256", 00:16:19.042 "dhgroup": "null" 00:16:19.042 } 00:16:19.042 } 00:16:19.042 ]' 00:16:19.042 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.042 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.042 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.300 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:19.300 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.300 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.300 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.300 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.558 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 00:16:20.491 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.491 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:20.491 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.491 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.491 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.491 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.491 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.492 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.492 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.749 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:16:20.749 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.749 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:20.749 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:20.749 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:20.749 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.749 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.749 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.749 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.749 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.749 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.749 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.007 00:16:21.007 19:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.007 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.007 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.264 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.264 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.264 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.264 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.264 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.264 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.264 { 00:16:21.264 "cntlid": 9, 00:16:21.264 "qid": 0, 00:16:21.264 "state": "enabled", 00:16:21.264 "thread": "nvmf_tgt_poll_group_000", 00:16:21.264 "listen_address": { 00:16:21.264 "trtype": "TCP", 00:16:21.264 "adrfam": "IPv4", 00:16:21.264 "traddr": "10.0.0.2", 00:16:21.264 "trsvcid": "4420" 00:16:21.264 }, 00:16:21.264 "peer_address": { 00:16:21.264 "trtype": "TCP", 00:16:21.264 "adrfam": "IPv4", 00:16:21.264 "traddr": "10.0.0.1", 00:16:21.264 "trsvcid": "59130" 00:16:21.264 }, 00:16:21.264 "auth": { 00:16:21.264 "state": "completed", 00:16:21.264 "digest": "sha256", 00:16:21.264 "dhgroup": "ffdhe2048" 00:16:21.264 } 00:16:21.264 } 00:16:21.264 ]' 00:16:21.264 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.264 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.264 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.264 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:21.264 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.264 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.264 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.264 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.521 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:16:22.892 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.892 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:22.892 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.892 19:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.892 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.892 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.892 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:22.892 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:22.892 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:22.892 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.892 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:22.892 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:22.892 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:22.892 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.892 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.892 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.892 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.892 19:07:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.892 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.892 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.149 00:16:23.149 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.149 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.149 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.407 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.407 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.407 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.407 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.407 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.407 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.407 { 
00:16:23.407 "cntlid": 11, 00:16:23.407 "qid": 0, 00:16:23.407 "state": "enabled", 00:16:23.407 "thread": "nvmf_tgt_poll_group_000", 00:16:23.407 "listen_address": { 00:16:23.407 "trtype": "TCP", 00:16:23.407 "adrfam": "IPv4", 00:16:23.407 "traddr": "10.0.0.2", 00:16:23.407 "trsvcid": "4420" 00:16:23.407 }, 00:16:23.407 "peer_address": { 00:16:23.407 "trtype": "TCP", 00:16:23.407 "adrfam": "IPv4", 00:16:23.407 "traddr": "10.0.0.1", 00:16:23.407 "trsvcid": "59160" 00:16:23.407 }, 00:16:23.407 "auth": { 00:16:23.407 "state": "completed", 00:16:23.407 "digest": "sha256", 00:16:23.407 "dhgroup": "ffdhe2048" 00:16:23.407 } 00:16:23.407 } 00:16:23.407 ]' 00:16:23.407 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.407 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.407 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.407 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:23.407 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.665 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.665 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.665 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.923 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==: 00:16:24.856 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.856 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:24.856 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.856 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.856 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.856 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.856 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:24.856 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:25.114 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:25.114 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.114 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:25.114 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:16:25.114 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:25.114 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.114 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.114 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.114 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.114 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.114 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.114 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.372 00:16:25.372 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.372 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.372 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.630 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.630 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.630 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.630 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.630 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.630 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.630 { 00:16:25.630 "cntlid": 13, 00:16:25.630 "qid": 0, 00:16:25.630 "state": "enabled", 00:16:25.630 "thread": "nvmf_tgt_poll_group_000", 00:16:25.630 "listen_address": { 00:16:25.630 "trtype": "TCP", 00:16:25.630 "adrfam": "IPv4", 00:16:25.630 "traddr": "10.0.0.2", 00:16:25.630 "trsvcid": "4420" 00:16:25.630 }, 00:16:25.630 "peer_address": { 00:16:25.630 "trtype": "TCP", 00:16:25.630 "adrfam": "IPv4", 00:16:25.630 "traddr": "10.0.0.1", 00:16:25.630 "trsvcid": "49160" 00:16:25.630 }, 00:16:25.630 "auth": { 00:16:25.630 "state": "completed", 00:16:25.630 "digest": "sha256", 00:16:25.630 "dhgroup": "ffdhe2048" 00:16:25.630 } 00:16:25.630 } 00:16:25.630 ]' 00:16:25.630 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.630 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.630 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.630 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.630 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.630 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.630 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.630 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.888 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX: 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.323 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.584 00:16:27.584 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.584 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.584 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.842 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.842 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.842 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.842 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.842 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.842 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.842 { 00:16:27.842 "cntlid": 15, 00:16:27.842 "qid": 0, 00:16:27.842 "state": "enabled", 00:16:27.842 "thread": "nvmf_tgt_poll_group_000", 00:16:27.842 "listen_address": { 00:16:27.842 "trtype": "TCP", 00:16:27.842 "adrfam": "IPv4", 00:16:27.842 "traddr": "10.0.0.2", 00:16:27.842 "trsvcid": "4420" 00:16:27.842 }, 00:16:27.842 "peer_address": { 00:16:27.842 "trtype": "TCP", 00:16:27.842 "adrfam": "IPv4", 00:16:27.842 "traddr": "10.0.0.1", 00:16:27.842 "trsvcid": "49192" 00:16:27.842 }, 00:16:27.842 "auth": { 
00:16:27.842 "state": "completed", 00:16:27.842 "digest": "sha256", 00:16:27.842 "dhgroup": "ffdhe2048" 00:16:27.842 } 00:16:27.842 } 00:16:27.842 ]' 00:16:27.842 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.842 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.842 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.842 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.842 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.099 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.099 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.099 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.357 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 00:16:29.290 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.290 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:29.290 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.290 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.290 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.290 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.290 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.290 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:29.290 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:29.548 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:29.548 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.548 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:29.548 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:29.548 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:29.548 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.548 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.548 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.548 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.548 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.548 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.548 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.806 00:16:29.806 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.806 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.806 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.064 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.064 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.064 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:30.064 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.064 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.064 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.064 { 00:16:30.064 "cntlid": 17, 00:16:30.064 "qid": 0, 00:16:30.064 "state": "enabled", 00:16:30.064 "thread": "nvmf_tgt_poll_group_000", 00:16:30.064 "listen_address": { 00:16:30.064 "trtype": "TCP", 00:16:30.064 "adrfam": "IPv4", 00:16:30.064 "traddr": "10.0.0.2", 00:16:30.064 "trsvcid": "4420" 00:16:30.064 }, 00:16:30.064 "peer_address": { 00:16:30.064 "trtype": "TCP", 00:16:30.064 "adrfam": "IPv4", 00:16:30.064 "traddr": "10.0.0.1", 00:16:30.064 "trsvcid": "49234" 00:16:30.064 }, 00:16:30.064 "auth": { 00:16:30.064 "state": "completed", 00:16:30.064 "digest": "sha256", 00:16:30.064 "dhgroup": "ffdhe3072" 00:16:30.064 } 00:16:30.064 } 00:16:30.064 ]' 00:16:30.064 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.064 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.064 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.064 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.064 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.064 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.064 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.064 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.322 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:16:31.256 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.256 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:31.256 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.256 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.256 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.256 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.256 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:31.256 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:31.514 19:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:31.514 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.514 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:31.514 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:31.514 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:31.514 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.514 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.514 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.514 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.514 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.514 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.514 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:32.116 00:16:32.117 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.117 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.117 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.117 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.117 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.117 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.117 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.374 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.374 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.374 { 00:16:32.374 "cntlid": 19, 00:16:32.374 "qid": 0, 00:16:32.374 "state": "enabled", 00:16:32.374 "thread": "nvmf_tgt_poll_group_000", 00:16:32.374 "listen_address": { 00:16:32.374 "trtype": "TCP", 00:16:32.374 "adrfam": "IPv4", 00:16:32.374 "traddr": "10.0.0.2", 00:16:32.374 "trsvcid": "4420" 00:16:32.374 }, 00:16:32.374 "peer_address": { 00:16:32.374 "trtype": "TCP", 00:16:32.374 "adrfam": "IPv4", 00:16:32.374 "traddr": "10.0.0.1", 00:16:32.374 "trsvcid": "49264" 00:16:32.374 }, 00:16:32.374 "auth": { 00:16:32.374 "state": "completed", 00:16:32.374 "digest": "sha256", 00:16:32.374 "dhgroup": "ffdhe3072" 00:16:32.374 } 00:16:32.374 } 00:16:32.374 ]' 00:16:32.374 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.374 
19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.374 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.374 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.374 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.374 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.374 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.374 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.632 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==: 00:16:33.564 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.564 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:33.564 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.564 19:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.564 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.564 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.564 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.564 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.822 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:33.822 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.822 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:33.822 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:33.822 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:33.822 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.822 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.822 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.822 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.822 19:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.822 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.822 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.080 00:16:34.080 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.080 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.080 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.337 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.337 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.337 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.337 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.337 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.337 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.337 { 
00:16:34.337 "cntlid": 21, 00:16:34.337 "qid": 0, 00:16:34.337 "state": "enabled", 00:16:34.337 "thread": "nvmf_tgt_poll_group_000", 00:16:34.337 "listen_address": { 00:16:34.337 "trtype": "TCP", 00:16:34.337 "adrfam": "IPv4", 00:16:34.337 "traddr": "10.0.0.2", 00:16:34.337 "trsvcid": "4420" 00:16:34.337 }, 00:16:34.337 "peer_address": { 00:16:34.337 "trtype": "TCP", 00:16:34.337 "adrfam": "IPv4", 00:16:34.337 "traddr": "10.0.0.1", 00:16:34.337 "trsvcid": "50140" 00:16:34.337 }, 00:16:34.337 "auth": { 00:16:34.337 "state": "completed", 00:16:34.337 "digest": "sha256", 00:16:34.337 "dhgroup": "ffdhe3072" 00:16:34.337 } 00:16:34.337 } 00:16:34.337 ]' 00:16:34.337 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.337 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.337 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.593 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.593 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.593 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.593 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.593 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.850 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX: 00:16:35.780 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.780 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:35.780 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.780 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.780 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.780 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.780 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.780 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.780 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:35.780 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.780 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.780 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe3072 00:16:35.780 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:35.780 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.780 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:35.780 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.780 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.780 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.780 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.780 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:36.344 00:16:36.344 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.344 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.344 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.344 19:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.344 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.344 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.344 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.344 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.344 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.344 { 00:16:36.344 "cntlid": 23, 00:16:36.344 "qid": 0, 00:16:36.344 "state": "enabled", 00:16:36.344 "thread": "nvmf_tgt_poll_group_000", 00:16:36.344 "listen_address": { 00:16:36.344 "trtype": "TCP", 00:16:36.344 "adrfam": "IPv4", 00:16:36.344 "traddr": "10.0.0.2", 00:16:36.344 "trsvcid": "4420" 00:16:36.344 }, 00:16:36.344 "peer_address": { 00:16:36.344 "trtype": "TCP", 00:16:36.344 "adrfam": "IPv4", 00:16:36.344 "traddr": "10.0.0.1", 00:16:36.344 "trsvcid": "50164" 00:16:36.344 }, 00:16:36.344 "auth": { 00:16:36.344 "state": "completed", 00:16:36.344 "digest": "sha256", 00:16:36.344 "dhgroup": "ffdhe3072" 00:16:36.344 } 00:16:36.344 } 00:16:36.344 ]' 00:16:36.602 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.602 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.602 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.602 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.602 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.602 19:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.602 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.602 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.860 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 00:16:37.791 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.791 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:37.791 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.792 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.792 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.792 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.792 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.792 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:37.792 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:38.049 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:38.049 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.049 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.049 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:38.049 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:38.049 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.049 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.049 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.049 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.049 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.049 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.049 19:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.614 00:16:38.614 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.614 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.614 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.614 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.614 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.614 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.614 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.614 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.614 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.614 { 00:16:38.614 "cntlid": 25, 00:16:38.614 "qid": 0, 00:16:38.614 "state": "enabled", 00:16:38.614 "thread": "nvmf_tgt_poll_group_000", 00:16:38.614 "listen_address": { 00:16:38.614 "trtype": "TCP", 00:16:38.614 "adrfam": "IPv4", 00:16:38.614 "traddr": "10.0.0.2", 00:16:38.614 "trsvcid": "4420" 00:16:38.614 }, 00:16:38.614 "peer_address": { 00:16:38.614 "trtype": "TCP", 00:16:38.614 "adrfam": "IPv4", 00:16:38.614 "traddr": "10.0.0.1", 
00:16:38.614 "trsvcid": "50200" 00:16:38.614 }, 00:16:38.614 "auth": { 00:16:38.614 "state": "completed", 00:16:38.614 "digest": "sha256", 00:16:38.614 "dhgroup": "ffdhe4096" 00:16:38.614 } 00:16:38.614 } 00:16:38.614 ]' 00:16:38.614 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.872 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.872 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.872 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.872 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.872 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.872 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.872 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.129 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:16:40.061 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:40.061 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:40.061 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.061 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.061 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.061 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.061 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.061 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.319 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:40.319 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.319 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.319 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:40.319 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:40.319 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.319 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.319 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.319 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.319 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.319 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.319 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.884 00:16:40.884 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.884 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.884 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.141 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.141 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.141 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:41.141 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.141 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.141 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.141 { 00:16:41.141 "cntlid": 27, 00:16:41.141 "qid": 0, 00:16:41.141 "state": "enabled", 00:16:41.141 "thread": "nvmf_tgt_poll_group_000", 00:16:41.141 "listen_address": { 00:16:41.141 "trtype": "TCP", 00:16:41.141 "adrfam": "IPv4", 00:16:41.141 "traddr": "10.0.0.2", 00:16:41.141 "trsvcid": "4420" 00:16:41.141 }, 00:16:41.141 "peer_address": { 00:16:41.141 "trtype": "TCP", 00:16:41.141 "adrfam": "IPv4", 00:16:41.141 "traddr": "10.0.0.1", 00:16:41.141 "trsvcid": "50230" 00:16:41.141 }, 00:16:41.141 "auth": { 00:16:41.141 "state": "completed", 00:16:41.141 "digest": "sha256", 00:16:41.141 "dhgroup": "ffdhe4096" 00:16:41.141 } 00:16:41.141 } 00:16:41.141 ]' 00:16:41.141 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.141 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.142 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.142 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.142 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.142 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.142 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.142 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.399 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==: 00:16:42.772 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.772 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:42.772 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.772 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.772 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.772 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.772 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.772 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.772 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 2 00:16:42.772 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.772 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:42.772 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:42.772 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:42.772 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.772 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.772 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.772 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.772 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.772 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.772 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.029 00:16:43.287 19:07:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.287 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.287 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.287 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.287 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.287 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.287 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.544 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.544 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.544 { 00:16:43.544 "cntlid": 29, 00:16:43.544 "qid": 0, 00:16:43.544 "state": "enabled", 00:16:43.544 "thread": "nvmf_tgt_poll_group_000", 00:16:43.544 "listen_address": { 00:16:43.544 "trtype": "TCP", 00:16:43.544 "adrfam": "IPv4", 00:16:43.544 "traddr": "10.0.0.2", 00:16:43.544 "trsvcid": "4420" 00:16:43.544 }, 00:16:43.544 "peer_address": { 00:16:43.544 "trtype": "TCP", 00:16:43.544 "adrfam": "IPv4", 00:16:43.544 "traddr": "10.0.0.1", 00:16:43.544 "trsvcid": "50246" 00:16:43.544 }, 00:16:43.544 "auth": { 00:16:43.544 "state": "completed", 00:16:43.544 "digest": "sha256", 00:16:43.544 "dhgroup": "ffdhe4096" 00:16:43.544 } 00:16:43.544 } 00:16:43.544 ]' 00:16:43.544 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.544 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.544 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.544 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.544 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.544 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.544 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.544 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.801 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX: 00:16:44.735 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.735 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:44.735 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.735 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:44.735 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.735 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.735 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.735 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.993 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:44.993 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.993 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.993 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:44.993 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:44.993 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.993 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:44.993 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.993 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.993 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:44.993 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.993 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:45.559 00:16:45.559 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.559 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.559 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.818 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.818 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.818 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.818 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.818 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.818 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.818 { 00:16:45.818 "cntlid": 31, 00:16:45.818 "qid": 0, 00:16:45.818 "state": "enabled", 00:16:45.818 "thread": "nvmf_tgt_poll_group_000", 
00:16:45.818 "listen_address": { 00:16:45.818 "trtype": "TCP", 00:16:45.818 "adrfam": "IPv4", 00:16:45.818 "traddr": "10.0.0.2", 00:16:45.818 "trsvcid": "4420" 00:16:45.818 }, 00:16:45.818 "peer_address": { 00:16:45.818 "trtype": "TCP", 00:16:45.818 "adrfam": "IPv4", 00:16:45.818 "traddr": "10.0.0.1", 00:16:45.818 "trsvcid": "41176" 00:16:45.818 }, 00:16:45.818 "auth": { 00:16:45.818 "state": "completed", 00:16:45.818 "digest": "sha256", 00:16:45.818 "dhgroup": "ffdhe4096" 00:16:45.818 } 00:16:45.818 } 00:16:45.818 ]' 00:16:45.818 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.818 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.818 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.818 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.818 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.818 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.818 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.818 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.080 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 
00:16:47.057 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.057 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:47.057 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.057 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.057 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.057 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.057 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.057 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.057 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.315 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:47.315 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.315 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.315 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:47.315 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:16:47.315 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.315 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.315 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.315 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.315 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.315 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.315 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.881 00:16:47.881 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.881 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.881 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.139 19:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.139 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.139 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.139 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.139 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.139 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.139 { 00:16:48.139 "cntlid": 33, 00:16:48.139 "qid": 0, 00:16:48.139 "state": "enabled", 00:16:48.139 "thread": "nvmf_tgt_poll_group_000", 00:16:48.139 "listen_address": { 00:16:48.139 "trtype": "TCP", 00:16:48.139 "adrfam": "IPv4", 00:16:48.139 "traddr": "10.0.0.2", 00:16:48.139 "trsvcid": "4420" 00:16:48.139 }, 00:16:48.139 "peer_address": { 00:16:48.139 "trtype": "TCP", 00:16:48.139 "adrfam": "IPv4", 00:16:48.139 "traddr": "10.0.0.1", 00:16:48.139 "trsvcid": "41210" 00:16:48.139 }, 00:16:48.139 "auth": { 00:16:48.139 "state": "completed", 00:16:48.139 "digest": "sha256", 00:16:48.139 "dhgroup": "ffdhe6144" 00:16:48.139 } 00:16:48.139 } 00:16:48.139 ]' 00:16:48.139 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.139 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.139 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.397 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.397 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.397 19:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.397 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.397 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.655 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:16:49.589 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.589 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:49.589 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.589 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.589 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.589 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.589 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe6144 00:16:49.589 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.847 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:49.847 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.847 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.847 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:49.847 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:49.847 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.847 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.847 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.847 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.847 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.847 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.847 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:50.413
00:16:50.413 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:50.413 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:50.413 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:50.670 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:50.670 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:50.670 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:50.670 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:50.670 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:50.670 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:50.670 {
00:16:50.670 "cntlid": 35,
00:16:50.670 "qid": 0,
00:16:50.670 "state": "enabled",
00:16:50.670 "thread": "nvmf_tgt_poll_group_000",
00:16:50.670 "listen_address": {
00:16:50.670 "trtype": "TCP",
00:16:50.670 "adrfam": "IPv4",
00:16:50.670 "traddr": "10.0.0.2",
00:16:50.670 "trsvcid": "4420"
00:16:50.670 },
00:16:50.670 "peer_address": {
00:16:50.670 "trtype": "TCP",
00:16:50.670 "adrfam": "IPv4",
00:16:50.670 "traddr": "10.0.0.1",
00:16:50.670 "trsvcid": "41242" },
00:16:50.670 "auth": {
00:16:50.670 "state": "completed",
00:16:50.670 "digest": "sha256",
00:16:50.670 "dhgroup": "ffdhe6144"
00:16:50.670 }
00:16:50.670 }
00:16:50.670 ]'
00:16:50.670 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:50.670 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:50.670 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:50.670 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:50.671 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:50.929 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:50.929 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:50.929 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:51.187 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==:
00:16:52.121 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:52.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:52.121 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:52.121 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.121 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:52.121 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.121 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:52.121 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:52.121 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:52.379 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2
00:16:52.380 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:52.380 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:52.380 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:52.380 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:52.380 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:52.380 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:52.380 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.380 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:52.380 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.380 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:52.380 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:52.945
00:16:52.945 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:52.945 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:52.945 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:52.945 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:52.945 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:52.945 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.945 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:52.945 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.945 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:52.945 {
00:16:52.945 "cntlid": 37,
00:16:52.945 "qid": 0,
00:16:52.945 "state": "enabled",
00:16:52.945 "thread": "nvmf_tgt_poll_group_000",
00:16:52.945 "listen_address": {
00:16:52.945 "trtype": "TCP",
00:16:52.945 "adrfam": "IPv4",
00:16:52.945 "traddr": "10.0.0.2",
00:16:52.945 "trsvcid": "4420"
00:16:52.945 },
00:16:52.945 "peer_address": {
00:16:52.945 "trtype": "TCP",
00:16:52.945 "adrfam": "IPv4",
00:16:52.945 "traddr": "10.0.0.1",
00:16:52.945 "trsvcid": "41270" },
00:16:52.945 "auth": {
00:16:52.945 "state": "completed",
00:16:52.945 "digest": "sha256",
00:16:52.945 "dhgroup": "ffdhe6144"
00:16:52.945 }
00:16:52.945 }
00:16:52.945 ]'
00:16:52.945 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:53.203 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:53.203 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:53.203 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:53.203 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:53.203 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:53.203 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:53.203 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:53.461 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX:
00:16:54.395 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:54.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:54.395 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:54.395 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.395 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:54.395 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.395 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:54.395 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:54.395 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:54.653 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3
00:16:54.653 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:54.653 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:54.653 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:54.653 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:54.653 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:54.653 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:16:54.653 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.653 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:54.654 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.654 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:54.654 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:55.219
00:16:55.219 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:55.219 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:55.219 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:55.477 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:55.477 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:55.477 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.477 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:55.477 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.477 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:55.477 {
00:16:55.477 "cntlid": 39,
00:16:55.477 "qid": 0,
00:16:55.477 "state": "enabled",
00:16:55.477 "thread": "nvmf_tgt_poll_group_000",
00:16:55.477 "listen_address": {
00:16:55.477 "trtype": "TCP",
00:16:55.477 "adrfam": "IPv4",
00:16:55.477 "traddr": "10.0.0.2",
00:16:55.477 "trsvcid": "4420"
00:16:55.477 },
00:16:55.477 "peer_address": {
00:16:55.477 "trtype": "TCP",
00:16:55.477 "adrfam": "IPv4",
00:16:55.477 "traddr": "10.0.0.1",
00:16:55.477 "trsvcid": "49082" },
00:16:55.477 "auth": {
00:16:55.477 "state": "completed",
00:16:55.477 "digest": "sha256",
00:16:55.477 "dhgroup": "ffdhe6144"
00:16:55.477 }
00:16:55.477 }
00:16:55.477 ]'
00:16:55.477 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:55.477 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:55.477 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:55.735 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:55.735 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:55.735 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:55.735 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:55.735 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:55.993 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=:
00:16:56.928 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:56.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:56.928 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:56.928 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.928 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:56.928 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.928 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:56.928 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:56.928 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:56.928 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:57.186 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0
00:16:57.186 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:57.186 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:57.186 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:57.186 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:57.186 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:57.186 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:57.186 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:57.186 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:57.186 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.186 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:57.186 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:58.119
00:16:58.119 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:58.119 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:58.119 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:58.377 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:58.377 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:58.377 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.377 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:58.377 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.377 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:58.377 {
00:16:58.377 "cntlid": 41,
00:16:58.377 "qid": 0,
00:16:58.377 "state": "enabled",
00:16:58.377 "thread": "nvmf_tgt_poll_group_000",
00:16:58.377 "listen_address": {
00:16:58.377 "trtype": "TCP",
00:16:58.377 "adrfam": "IPv4",
00:16:58.377 "traddr": "10.0.0.2",
00:16:58.377 "trsvcid": "4420"
00:16:58.377 },
00:16:58.377 "peer_address": {
00:16:58.377 "trtype": "TCP",
00:16:58.377 "adrfam": "IPv4",
00:16:58.377 "traddr": "10.0.0.1",
00:16:58.377 "trsvcid": "49116" },
00:16:58.377 "auth": {
00:16:58.377 "state": "completed",
00:16:58.377 "digest": "sha256",
00:16:58.377 "dhgroup": "ffdhe8192"
00:16:58.377 }
00:16:58.377 }
00:16:58.377 ]'
00:16:58.377 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:58.377 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:58.377 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:58.377 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:58.377 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:58.635 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:58.635 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:58.635 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:58.893 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=:
00:16:59.826 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:59.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:59.826 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:59.826 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:59.826 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:59.826 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:59.826 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:59.826 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:59.826 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:00.083 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1
00:17:00.084 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:00.084 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:00.084 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:00.084 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:00.084 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:00.084 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:00.084 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:00.084 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:00.084 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:00.084 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:00.084 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:01.017
00:17:01.017 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:01.017 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:01.017 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:01.275 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:01.275 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:01.275 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:01.275 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:01.275 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:01.275 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:01.275 {
00:17:01.275 "cntlid": 43,
00:17:01.275 "qid": 0,
00:17:01.275 "state": "enabled",
00:17:01.275 "thread": "nvmf_tgt_poll_group_000",
00:17:01.275 "listen_address": {
00:17:01.275 "trtype": "TCP",
00:17:01.275 "adrfam": "IPv4",
00:17:01.275 "traddr": "10.0.0.2",
00:17:01.275 "trsvcid": "4420"
00:17:01.275 },
00:17:01.275 "peer_address": {
00:17:01.275 "trtype": "TCP",
00:17:01.275 "adrfam": "IPv4",
00:17:01.275 "traddr": "10.0.0.1",
00:17:01.275 "trsvcid": "49136" },
00:17:01.275 "auth": {
00:17:01.275 "state": "completed",
00:17:01.275 "digest": "sha256",
00:17:01.275 "dhgroup": "ffdhe8192"
00:17:01.275 }
00:17:01.275 }
00:17:01.275 ]'
00:17:01.275 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:01.275 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:01.275 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:01.275 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:01.275 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:01.533 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:01.533 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:01.533 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:01.790 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==:
00:17:02.724 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:02.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:02.724 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:17:02.724 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.724 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.724 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.724 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:02.724 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:02.724 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:02.982 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2
00:17:02.982 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:02.982 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:02.982 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:02.982 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:02.982 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:02.982 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:02.982 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.982 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.982 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.982 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:02.982 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:03.915
00:17:03.915 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:03.915 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:03.915 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:04.173 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:04.173 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:04.173 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:04.173 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.173 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:04.173 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:04.173 {
00:17:04.173 "cntlid": 45,
00:17:04.173 "qid": 0,
00:17:04.173 "state": "enabled",
00:17:04.173 "thread": "nvmf_tgt_poll_group_000",
00:17:04.173 "listen_address": {
00:17:04.173 "trtype": "TCP",
00:17:04.173 "adrfam": "IPv4",
00:17:04.173 "traddr": "10.0.0.2",
00:17:04.173 "trsvcid": "4420"
00:17:04.173 },
00:17:04.173 "peer_address": {
00:17:04.173 "trtype": "TCP",
00:17:04.173 "adrfam": "IPv4",
00:17:04.173 "traddr": "10.0.0.1",
00:17:04.173 "trsvcid": "49164" },
00:17:04.173 "auth": {
00:17:04.173 "state": "completed",
00:17:04.173 "digest": "sha256",
00:17:04.173 "dhgroup": "ffdhe8192"
00:17:04.173 }
00:17:04.173 }
00:17:04.173 ]'
00:17:04.173 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:04.173 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:04.173 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:04.173 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:04.173 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:04.173 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:04.173 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:04.173 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:04.431 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX:
00:17:05.379 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:05.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:05.379 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:17:05.379 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.379 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.379 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.379 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:05.379 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:05.379 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:05.671 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3
00:17:05.671 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:05.671 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:05.671 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:05.671 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:05.671 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:05.671 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:17:05.671 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.671 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.671 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.671 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:05.671 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:06.605
00:17:06.605 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:06.605 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:06.605 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:06.863 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:06.863 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:06.863 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.863 19:07:59
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.863 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.863 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.863 { 00:17:06.863 "cntlid": 47, 00:17:06.863 "qid": 0, 00:17:06.863 "state": "enabled", 00:17:06.863 "thread": "nvmf_tgt_poll_group_000", 00:17:06.863 "listen_address": { 00:17:06.863 "trtype": "TCP", 00:17:06.863 "adrfam": "IPv4", 00:17:06.863 "traddr": "10.0.0.2", 00:17:06.863 "trsvcid": "4420" 00:17:06.863 }, 00:17:06.863 "peer_address": { 00:17:06.863 "trtype": "TCP", 00:17:06.863 "adrfam": "IPv4", 00:17:06.863 "traddr": "10.0.0.1", 00:17:06.863 "trsvcid": "34442" 00:17:06.863 }, 00:17:06.863 "auth": { 00:17:06.863 "state": "completed", 00:17:06.863 "digest": "sha256", 00:17:06.863 "dhgroup": "ffdhe8192" 00:17:06.863 } 00:17:06.863 } 00:17:06.863 ]' 00:17:06.863 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.121 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.121 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.121 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.121 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.121 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.121 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.121 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.386 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 00:17:08.321 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.321 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:08.321 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.321 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.321 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.321 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:08.321 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.321 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.321 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.321 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.579 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:08.579 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.579 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.579 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:08.579 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:08.579 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.579 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.579 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.579 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.579 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.579 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.579 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.838 00:17:08.838 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.838 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.838 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.096 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.096 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.096 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.096 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.096 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.096 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.096 { 00:17:09.096 "cntlid": 49, 00:17:09.096 "qid": 0, 00:17:09.096 "state": "enabled", 00:17:09.096 "thread": "nvmf_tgt_poll_group_000", 00:17:09.096 "listen_address": { 00:17:09.096 "trtype": "TCP", 00:17:09.096 "adrfam": "IPv4", 00:17:09.096 "traddr": "10.0.0.2", 00:17:09.096 "trsvcid": "4420" 00:17:09.096 }, 00:17:09.096 "peer_address": { 00:17:09.096 "trtype": "TCP", 00:17:09.096 "adrfam": "IPv4", 00:17:09.096 "traddr": "10.0.0.1", 00:17:09.096 "trsvcid": "34468" 00:17:09.096 }, 00:17:09.096 "auth": { 00:17:09.096 "state": "completed", 00:17:09.096 "digest": "sha384", 00:17:09.096 "dhgroup": "null" 00:17:09.096 } 00:17:09.096 } 00:17:09.096 ]' 00:17:09.096 
19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.096 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.096 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.354 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:09.354 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.354 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.354 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.354 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.612 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:17:10.546 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.546 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:10.546 
19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.546 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.546 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.546 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.546 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:10.546 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:10.804 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:10.804 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.804 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:10.804 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:10.804 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:10.804 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.804 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.804 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.804 19:08:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.804 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.804 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.804 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.060 00:17:11.060 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.060 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.060 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.317 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.317 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.317 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.317 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.317 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:11.317 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.317 { 00:17:11.317 "cntlid": 51, 00:17:11.317 "qid": 0, 00:17:11.317 "state": "enabled", 00:17:11.317 "thread": "nvmf_tgt_poll_group_000", 00:17:11.317 "listen_address": { 00:17:11.317 "trtype": "TCP", 00:17:11.317 "adrfam": "IPv4", 00:17:11.317 "traddr": "10.0.0.2", 00:17:11.317 "trsvcid": "4420" 00:17:11.317 }, 00:17:11.317 "peer_address": { 00:17:11.317 "trtype": "TCP", 00:17:11.317 "adrfam": "IPv4", 00:17:11.317 "traddr": "10.0.0.1", 00:17:11.317 "trsvcid": "34496" 00:17:11.317 }, 00:17:11.317 "auth": { 00:17:11.317 "state": "completed", 00:17:11.317 "digest": "sha384", 00:17:11.317 "dhgroup": "null" 00:17:11.317 } 00:17:11.317 } 00:17:11.317 ]' 00:17:11.317 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.317 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.317 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.317 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:11.317 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.317 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.317 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.317 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.574 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==: 00:17:12.947 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:12.947 19:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.947 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.204 00:17:13.204 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.204 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.204 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.462 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.462 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.462 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.462 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.462 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.462 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.462 { 00:17:13.462 "cntlid": 53, 00:17:13.462 "qid": 0, 00:17:13.462 "state": "enabled", 00:17:13.462 "thread": "nvmf_tgt_poll_group_000", 00:17:13.462 "listen_address": { 00:17:13.462 "trtype": "TCP", 00:17:13.462 "adrfam": "IPv4", 00:17:13.462 "traddr": "10.0.0.2", 00:17:13.462 "trsvcid": "4420" 00:17:13.462 }, 00:17:13.462 "peer_address": { 00:17:13.462 "trtype": "TCP", 00:17:13.462 "adrfam": "IPv4", 00:17:13.462 "traddr": "10.0.0.1", 00:17:13.462 "trsvcid": "34518" 00:17:13.462 }, 00:17:13.462 "auth": { 00:17:13.462 "state": "completed", 00:17:13.462 "digest": "sha384", 00:17:13.462 "dhgroup": "null" 00:17:13.462 } 00:17:13.462 } 00:17:13.462 ]' 00:17:13.462 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.462 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.462 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.719 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:13.719 19:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.719 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.719 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.719 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.976 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX: 00:17:14.907 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.907 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:14.907 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.907 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.907 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.907 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.907 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:14.907 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:15.165 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:15.165 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.165 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.165 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:15.165 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:15.165 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.165 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:15.165 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.165 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.165 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.165 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.165 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.421 00:17:15.421 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.421 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.421 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.678 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.678 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.678 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.678 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.678 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.678 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.678 { 00:17:15.678 "cntlid": 55, 00:17:15.678 "qid": 0, 00:17:15.678 "state": "enabled", 00:17:15.678 "thread": "nvmf_tgt_poll_group_000", 00:17:15.678 "listen_address": { 00:17:15.678 "trtype": "TCP", 00:17:15.678 "adrfam": "IPv4", 00:17:15.678 "traddr": "10.0.0.2", 00:17:15.678 "trsvcid": "4420" 00:17:15.678 }, 00:17:15.678 "peer_address": { 00:17:15.678 "trtype": "TCP", 00:17:15.678 "adrfam": "IPv4", 00:17:15.678 "traddr": "10.0.0.1", 00:17:15.678 "trsvcid": "34440" 00:17:15.678 }, 00:17:15.678 "auth": { 
00:17:15.678 "state": "completed", 00:17:15.678 "digest": "sha384", 00:17:15.678 "dhgroup": "null" 00:17:15.678 } 00:17:15.678 } 00:17:15.678 ]' 00:17:15.678 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.678 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.678 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.934 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:15.934 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.934 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.934 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.934 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.190 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 00:17:17.118 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.118 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:17.118 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.118 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.118 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.118 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.118 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.119 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.119 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.375 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:17.375 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.375 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.375 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:17.375 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:17.375 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.375 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.375 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.375 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.375 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.375 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.376 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.632 00:17:17.632 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.632 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.632 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.889 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.889 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.889 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:17.889 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.889 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.889 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.889 { 00:17:17.889 "cntlid": 57, 00:17:17.889 "qid": 0, 00:17:17.889 "state": "enabled", 00:17:17.889 "thread": "nvmf_tgt_poll_group_000", 00:17:17.889 "listen_address": { 00:17:17.889 "trtype": "TCP", 00:17:17.889 "adrfam": "IPv4", 00:17:17.889 "traddr": "10.0.0.2", 00:17:17.889 "trsvcid": "4420" 00:17:17.889 }, 00:17:17.889 "peer_address": { 00:17:17.889 "trtype": "TCP", 00:17:17.889 "adrfam": "IPv4", 00:17:17.889 "traddr": "10.0.0.1", 00:17:17.889 "trsvcid": "34470" 00:17:17.889 }, 00:17:17.889 "auth": { 00:17:17.889 "state": "completed", 00:17:17.889 "digest": "sha384", 00:17:17.889 "dhgroup": "ffdhe2048" 00:17:17.889 } 00:17:17.889 } 00:17:17.889 ]' 00:17:17.889 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.889 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.889 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.146 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:18.146 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.146 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.146 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.146 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.404 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:17:19.335 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.335 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:19.335 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.335 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.335 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.335 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.335 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.335 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.593 19:08:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:19.593 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.593 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:19.593 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:19.593 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:19.593 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.593 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.593 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.593 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.593 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.593 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.593 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:20.157 00:17:20.157 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.157 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.157 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.157 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.157 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.157 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.157 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.157 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.157 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.157 { 00:17:20.157 "cntlid": 59, 00:17:20.157 "qid": 0, 00:17:20.157 "state": "enabled", 00:17:20.157 "thread": "nvmf_tgt_poll_group_000", 00:17:20.157 "listen_address": { 00:17:20.157 "trtype": "TCP", 00:17:20.157 "adrfam": "IPv4", 00:17:20.157 "traddr": "10.0.0.2", 00:17:20.157 "trsvcid": "4420" 00:17:20.157 }, 00:17:20.157 "peer_address": { 00:17:20.157 "trtype": "TCP", 00:17:20.157 "adrfam": "IPv4", 00:17:20.157 "traddr": "10.0.0.1", 00:17:20.157 "trsvcid": "34504" 00:17:20.157 }, 00:17:20.157 "auth": { 00:17:20.157 "state": "completed", 00:17:20.157 "digest": "sha384", 00:17:20.157 "dhgroup": "ffdhe2048" 00:17:20.157 } 00:17:20.157 } 00:17:20.157 ]' 00:17:20.157 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.414 
19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.414 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.414 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:20.414 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.414 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.414 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.414 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.707 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==: 00:17:21.638 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.638 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:21.638 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.638 19:08:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.638 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.638 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.638 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.638 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.896 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:21.896 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.896 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:21.896 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:21.896 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:21.896 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.896 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.896 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.896 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.896 19:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.896 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.896 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.154 00:17:22.154 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.154 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.154 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.411 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.411 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.411 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.411 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.411 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.411 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.411 { 
00:17:22.411 "cntlid": 61, 00:17:22.411 "qid": 0, 00:17:22.411 "state": "enabled", 00:17:22.411 "thread": "nvmf_tgt_poll_group_000", 00:17:22.411 "listen_address": { 00:17:22.411 "trtype": "TCP", 00:17:22.411 "adrfam": "IPv4", 00:17:22.411 "traddr": "10.0.0.2", 00:17:22.411 "trsvcid": "4420" 00:17:22.411 }, 00:17:22.411 "peer_address": { 00:17:22.411 "trtype": "TCP", 00:17:22.411 "adrfam": "IPv4", 00:17:22.411 "traddr": "10.0.0.1", 00:17:22.411 "trsvcid": "34546" 00:17:22.411 }, 00:17:22.411 "auth": { 00:17:22.411 "state": "completed", 00:17:22.411 "digest": "sha384", 00:17:22.411 "dhgroup": "ffdhe2048" 00:17:22.411 } 00:17:22.411 } 00:17:22.411 ]' 00:17:22.411 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.669 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.669 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.669 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:22.669 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.669 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.669 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.669 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.928 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX: 00:17:23.862 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.862 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:23.862 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.862 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.862 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.862 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.862 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.862 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.120 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:24.120 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.120 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:24.120 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:17:24.120 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:24.120 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.120 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:24.120 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.120 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.120 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.120 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.120 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.716 00:17:24.716 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.716 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.716 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.716 19:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.716 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.716 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.716 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.716 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.716 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.716 { 00:17:24.716 "cntlid": 63, 00:17:24.716 "qid": 0, 00:17:24.716 "state": "enabled", 00:17:24.716 "thread": "nvmf_tgt_poll_group_000", 00:17:24.716 "listen_address": { 00:17:24.716 "trtype": "TCP", 00:17:24.716 "adrfam": "IPv4", 00:17:24.716 "traddr": "10.0.0.2", 00:17:24.716 "trsvcid": "4420" 00:17:24.716 }, 00:17:24.716 "peer_address": { 00:17:24.716 "trtype": "TCP", 00:17:24.716 "adrfam": "IPv4", 00:17:24.716 "traddr": "10.0.0.1", 00:17:24.716 "trsvcid": "52586" 00:17:24.716 }, 00:17:24.716 "auth": { 00:17:24.716 "state": "completed", 00:17:24.716 "digest": "sha384", 00:17:24.716 "dhgroup": "ffdhe2048" 00:17:24.716 } 00:17:24.716 } 00:17:24.716 ]' 00:17:24.716 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.974 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.974 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.974 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.974 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.974 19:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.974 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.974 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.233 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 00:17:26.166 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.166 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:26.166 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.166 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.166 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.166 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.166 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.166 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.166 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.424 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:26.424 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.424 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:26.424 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:26.424 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:26.424 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.424 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.424 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.424 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.424 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.424 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.424 19:08:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.682 00:17:26.682 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.682 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.682 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.939 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.940 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.940 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.940 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.940 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.940 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.940 { 00:17:26.940 "cntlid": 65, 00:17:26.940 "qid": 0, 00:17:26.940 "state": "enabled", 00:17:26.940 "thread": "nvmf_tgt_poll_group_000", 00:17:26.940 "listen_address": { 00:17:26.940 "trtype": "TCP", 00:17:26.940 "adrfam": "IPv4", 00:17:26.940 "traddr": "10.0.0.2", 00:17:26.940 "trsvcid": "4420" 00:17:26.940 }, 00:17:26.940 "peer_address": { 00:17:26.940 "trtype": "TCP", 00:17:26.940 "adrfam": "IPv4", 00:17:26.940 "traddr": "10.0.0.1", 
00:17:26.940 "trsvcid": "52616" 00:17:26.940 }, 00:17:26.940 "auth": { 00:17:26.940 "state": "completed", 00:17:26.940 "digest": "sha384", 00:17:26.940 "dhgroup": "ffdhe3072" 00:17:26.940 } 00:17:26.940 } 00:17:26.940 ]' 00:17:26.940 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.940 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.940 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.197 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:27.197 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.197 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.197 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.197 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.455 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:17:28.389 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:28.389 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:28.389 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.389 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.389 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.389 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.389 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:28.389 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:28.647 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:28.647 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.647 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:28.647 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:28.647 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:28.647 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.647 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.647 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.647 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.647 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.647 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.647 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.905 00:17:28.905 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.905 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.905 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.163 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.163 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.163 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:29.163 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.163 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.163 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.163 { 00:17:29.163 "cntlid": 67, 00:17:29.163 "qid": 0, 00:17:29.163 "state": "enabled", 00:17:29.163 "thread": "nvmf_tgt_poll_group_000", 00:17:29.163 "listen_address": { 00:17:29.163 "trtype": "TCP", 00:17:29.163 "adrfam": "IPv4", 00:17:29.163 "traddr": "10.0.0.2", 00:17:29.163 "trsvcid": "4420" 00:17:29.163 }, 00:17:29.163 "peer_address": { 00:17:29.163 "trtype": "TCP", 00:17:29.163 "adrfam": "IPv4", 00:17:29.163 "traddr": "10.0.0.1", 00:17:29.163 "trsvcid": "52644" 00:17:29.163 }, 00:17:29.163 "auth": { 00:17:29.163 "state": "completed", 00:17:29.163 "digest": "sha384", 00:17:29.163 "dhgroup": "ffdhe3072" 00:17:29.163 } 00:17:29.163 } 00:17:29.163 ]' 00:17:29.163 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.163 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.163 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.421 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:29.421 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.421 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.421 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.421 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.678 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==: 00:17:30.611 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.611 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:30.611 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.611 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.611 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.611 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.611 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:30.611 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:30.869 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe3072 2 00:17:30.869 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.869 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:30.869 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:30.869 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:30.869 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.869 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.869 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.869 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.869 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.869 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.869 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.129 00:17:31.129 19:08:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.129 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.129 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.386 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.386 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.386 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.386 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.386 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.386 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.386 { 00:17:31.386 "cntlid": 69, 00:17:31.386 "qid": 0, 00:17:31.386 "state": "enabled", 00:17:31.386 "thread": "nvmf_tgt_poll_group_000", 00:17:31.386 "listen_address": { 00:17:31.386 "trtype": "TCP", 00:17:31.386 "adrfam": "IPv4", 00:17:31.386 "traddr": "10.0.0.2", 00:17:31.386 "trsvcid": "4420" 00:17:31.386 }, 00:17:31.386 "peer_address": { 00:17:31.386 "trtype": "TCP", 00:17:31.386 "adrfam": "IPv4", 00:17:31.386 "traddr": "10.0.0.1", 00:17:31.386 "trsvcid": "52674" 00:17:31.386 }, 00:17:31.386 "auth": { 00:17:31.386 "state": "completed", 00:17:31.386 "digest": "sha384", 00:17:31.386 "dhgroup": "ffdhe3072" 00:17:31.386 } 00:17:31.386 } 00:17:31.386 ]' 00:17:31.386 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.386 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.386 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.644 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.644 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.644 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.644 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.644 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.902 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX: 00:17:32.835 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.835 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:32.835 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.835 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:32.835 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.835 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.835 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:32.835 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:33.093 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:33.093 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.093 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:33.093 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:33.093 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:33.093 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.093 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:33.093 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.093 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.093 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:33.093 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.093 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.658 00:17:33.658 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.658 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.658 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.658 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.658 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.658 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.658 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.659 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.659 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.659 { 00:17:33.659 "cntlid": 71, 00:17:33.659 "qid": 0, 00:17:33.659 "state": "enabled", 00:17:33.659 "thread": "nvmf_tgt_poll_group_000", 
00:17:33.659 "listen_address": { 00:17:33.659 "trtype": "TCP", 00:17:33.659 "adrfam": "IPv4", 00:17:33.659 "traddr": "10.0.0.2", 00:17:33.659 "trsvcid": "4420" 00:17:33.659 }, 00:17:33.659 "peer_address": { 00:17:33.659 "trtype": "TCP", 00:17:33.659 "adrfam": "IPv4", 00:17:33.659 "traddr": "10.0.0.1", 00:17:33.659 "trsvcid": "33856" 00:17:33.659 }, 00:17:33.659 "auth": { 00:17:33.659 "state": "completed", 00:17:33.659 "digest": "sha384", 00:17:33.659 "dhgroup": "ffdhe3072" 00:17:33.659 } 00:17:33.659 } 00:17:33.659 ]' 00:17:33.659 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.917 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.917 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.917 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:33.917 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.917 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.917 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.917 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.175 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 
00:17:35.108 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.108 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:35.108 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.108 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.108 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.108 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.108 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.108 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.108 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.366 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:35.366 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.367 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.367 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:35.367 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:17:35.367 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.367 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.367 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.367 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.367 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.367 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.367 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.933 00:17:35.933 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.933 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.933 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.933 19:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.933 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.933 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.933 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.933 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.933 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.933 { 00:17:35.933 "cntlid": 73, 00:17:35.933 "qid": 0, 00:17:35.933 "state": "enabled", 00:17:35.933 "thread": "nvmf_tgt_poll_group_000", 00:17:35.933 "listen_address": { 00:17:35.933 "trtype": "TCP", 00:17:35.933 "adrfam": "IPv4", 00:17:35.933 "traddr": "10.0.0.2", 00:17:35.933 "trsvcid": "4420" 00:17:35.933 }, 00:17:35.933 "peer_address": { 00:17:35.933 "trtype": "TCP", 00:17:35.933 "adrfam": "IPv4", 00:17:35.933 "traddr": "10.0.0.1", 00:17:35.933 "trsvcid": "33892" 00:17:35.933 }, 00:17:35.933 "auth": { 00:17:35.933 "state": "completed", 00:17:35.933 "digest": "sha384", 00:17:35.933 "dhgroup": "ffdhe4096" 00:17:35.933 } 00:17:35.933 } 00:17:35.933 ]' 00:17:35.933 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.191 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.191 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.191 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:36.191 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.191 19:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.191 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.191 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.449 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:17:37.382 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.382 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:37.382 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.382 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.382 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.382 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.382 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe4096 00:17:37.382 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:37.640 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:37.641 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.641 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.641 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:37.641 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:37.641 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.641 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.641 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.641 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.641 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.641 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.641 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.207 00:17:38.207 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.207 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.207 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.464 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.464 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.464 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.464 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.464 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.464 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.464 { 00:17:38.464 "cntlid": 75, 00:17:38.464 "qid": 0, 00:17:38.464 "state": "enabled", 00:17:38.464 "thread": "nvmf_tgt_poll_group_000", 00:17:38.464 "listen_address": { 00:17:38.464 "trtype": "TCP", 00:17:38.464 "adrfam": "IPv4", 00:17:38.464 "traddr": "10.0.0.2", 00:17:38.464 "trsvcid": "4420" 00:17:38.464 }, 00:17:38.464 "peer_address": { 00:17:38.464 "trtype": "TCP", 00:17:38.464 "adrfam": "IPv4", 00:17:38.464 "traddr": "10.0.0.1", 00:17:38.464 "trsvcid": "33920" 00:17:38.464 
}, 00:17:38.464 "auth": { 00:17:38.464 "state": "completed", 00:17:38.464 "digest": "sha384", 00:17:38.464 "dhgroup": "ffdhe4096" 00:17:38.464 } 00:17:38.464 } 00:17:38.464 ]' 00:17:38.464 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.464 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.464 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.464 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.464 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.464 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.464 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.464 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.720 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==: 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.093 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.659 00:17:40.659 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.659 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.659 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.659 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.659 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.659 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.659 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:17:40.659 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.659 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.659 { 00:17:40.659 "cntlid": 77, 00:17:40.659 "qid": 0, 00:17:40.659 "state": "enabled", 00:17:40.659 "thread": "nvmf_tgt_poll_group_000", 00:17:40.659 "listen_address": { 00:17:40.659 "trtype": "TCP", 00:17:40.659 "adrfam": "IPv4", 00:17:40.659 "traddr": "10.0.0.2", 00:17:40.659 "trsvcid": "4420" 00:17:40.659 }, 00:17:40.659 "peer_address": { 00:17:40.659 "trtype": "TCP", 00:17:40.659 "adrfam": "IPv4", 00:17:40.659 "traddr": "10.0.0.1", 00:17:40.659 "trsvcid": "33962" 00:17:40.659 }, 00:17:40.659 "auth": { 00:17:40.659 "state": "completed", 00:17:40.659 "digest": "sha384", 00:17:40.659 "dhgroup": "ffdhe4096" 00:17:40.659 } 00:17:40.659 } 00:17:40.659 ]' 00:17:40.659 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.917 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.917 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.917 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:40.917 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.917 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.917 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.917 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:41.173 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX: 00:17:42.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:42.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:42.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:42.365 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:42.365 19:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.365 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.365 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:42.365 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:42.365 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.365 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:42.365 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.365 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.365 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.365 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.365 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.930 00:17:42.930 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.930 19:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.930 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.930 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.930 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.930 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.930 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.930 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.930 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.930 { 00:17:42.930 "cntlid": 79, 00:17:42.930 "qid": 0, 00:17:42.930 "state": "enabled", 00:17:42.930 "thread": "nvmf_tgt_poll_group_000", 00:17:42.930 "listen_address": { 00:17:42.930 "trtype": "TCP", 00:17:42.930 "adrfam": "IPv4", 00:17:42.930 "traddr": "10.0.0.2", 00:17:42.930 "trsvcid": "4420" 00:17:42.930 }, 00:17:42.930 "peer_address": { 00:17:42.930 "trtype": "TCP", 00:17:42.930 "adrfam": "IPv4", 00:17:42.930 "traddr": "10.0.0.1", 00:17:42.930 "trsvcid": "33982" 00:17:42.930 }, 00:17:42.930 "auth": { 00:17:42.930 "state": "completed", 00:17:42.930 "digest": "sha384", 00:17:42.930 "dhgroup": "ffdhe4096" 00:17:42.930 } 00:17:42.930 } 00:17:42.930 ]' 00:17:42.930 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.189 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.189 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.189 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.189 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.189 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.189 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.189 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.451 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 00:17:44.384 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.384 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:44.384 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.384 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.384 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.384 19:08:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.384 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.384 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.384 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.643 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:44.643 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.643 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:44.643 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:44.643 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:44.643 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.643 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.643 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.643 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.643 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.643 19:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.643 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.209 00:17:45.209 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.209 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.209 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.467 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.467 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.467 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.467 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.467 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.467 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.467 { 00:17:45.467 "cntlid": 81, 00:17:45.467 "qid": 0, 00:17:45.467 "state": "enabled", 00:17:45.467 "thread": 
"nvmf_tgt_poll_group_000", 00:17:45.467 "listen_address": { 00:17:45.467 "trtype": "TCP", 00:17:45.467 "adrfam": "IPv4", 00:17:45.467 "traddr": "10.0.0.2", 00:17:45.467 "trsvcid": "4420" 00:17:45.467 }, 00:17:45.467 "peer_address": { 00:17:45.467 "trtype": "TCP", 00:17:45.467 "adrfam": "IPv4", 00:17:45.467 "traddr": "10.0.0.1", 00:17:45.467 "trsvcid": "41898" 00:17:45.467 }, 00:17:45.467 "auth": { 00:17:45.467 "state": "completed", 00:17:45.467 "digest": "sha384", 00:17:45.467 "dhgroup": "ffdhe6144" 00:17:45.467 } 00:17:45.467 } 00:17:45.467 ]' 00:17:45.467 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.467 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.467 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.467 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.467 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.725 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.725 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.725 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.725 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:17:46.658 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.659 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:46.659 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.659 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.916 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.916 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.916 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:46.916 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:46.916 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:46.916 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.916 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:46.916 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe6144 00:17:46.916 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:46.916 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.917 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.917 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.917 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.174 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.174 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.174 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.740 00:17:47.740 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.740 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.740 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.998 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.998 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.998 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.998 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.998 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.998 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.998 { 00:17:47.998 "cntlid": 83, 00:17:47.998 "qid": 0, 00:17:47.998 "state": "enabled", 00:17:47.998 "thread": "nvmf_tgt_poll_group_000", 00:17:47.998 "listen_address": { 00:17:47.998 "trtype": "TCP", 00:17:47.998 "adrfam": "IPv4", 00:17:47.998 "traddr": "10.0.0.2", 00:17:47.998 "trsvcid": "4420" 00:17:47.998 }, 00:17:47.998 "peer_address": { 00:17:47.998 "trtype": "TCP", 00:17:47.998 "adrfam": "IPv4", 00:17:47.998 "traddr": "10.0.0.1", 00:17:47.998 "trsvcid": "41922" 00:17:47.998 }, 00:17:47.998 "auth": { 00:17:47.998 "state": "completed", 00:17:47.998 "digest": "sha384", 00:17:47.998 "dhgroup": "ffdhe6144" 00:17:47.998 } 00:17:47.998 } 00:17:47.998 ]' 00:17:47.998 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.998 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.998 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.998 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:47.998 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.998 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.998 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.998 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.257 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==: 00:17:49.191 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.191 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:49.191 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.191 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.191 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.191 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.191 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:49.191 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:49.448 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:49.448 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.448 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:49.448 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:49.448 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:49.448 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.448 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.448 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.448 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.448 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.448 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.448 19:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.014 00:17:50.014 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.014 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.014 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.272 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.272 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.272 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.272 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.272 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.272 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.272 { 00:17:50.272 "cntlid": 85, 00:17:50.272 "qid": 0, 00:17:50.272 "state": "enabled", 00:17:50.272 "thread": "nvmf_tgt_poll_group_000", 00:17:50.272 "listen_address": { 00:17:50.272 "trtype": "TCP", 00:17:50.272 "adrfam": "IPv4", 00:17:50.272 "traddr": "10.0.0.2", 00:17:50.272 "trsvcid": "4420" 00:17:50.272 }, 00:17:50.272 "peer_address": { 00:17:50.272 "trtype": "TCP", 00:17:50.272 "adrfam": "IPv4", 00:17:50.272 "traddr": "10.0.0.1", 
00:17:50.272 "trsvcid": "41942" 00:17:50.272 }, 00:17:50.272 "auth": { 00:17:50.272 "state": "completed", 00:17:50.272 "digest": "sha384", 00:17:50.272 "dhgroup": "ffdhe6144" 00:17:50.272 } 00:17:50.272 } 00:17:50.272 ]' 00:17:50.272 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.272 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.272 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.272 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:50.272 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.531 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.531 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.531 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.789 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX: 00:17:51.721 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.721 19:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:51.721 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.721 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.721 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.721 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.721 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:51.721 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:51.979 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:51.979 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.979 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:51.979 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:51.979 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:51.979 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.979 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:51.979 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.979 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.979 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.979 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.979 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.544 00:17:52.544 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.544 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.544 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.802 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.802 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.802 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.802 19:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.802 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.802 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.802 { 00:17:52.802 "cntlid": 87, 00:17:52.802 "qid": 0, 00:17:52.802 "state": "enabled", 00:17:52.802 "thread": "nvmf_tgt_poll_group_000", 00:17:52.802 "listen_address": { 00:17:52.802 "trtype": "TCP", 00:17:52.802 "adrfam": "IPv4", 00:17:52.802 "traddr": "10.0.0.2", 00:17:52.802 "trsvcid": "4420" 00:17:52.802 }, 00:17:52.802 "peer_address": { 00:17:52.802 "trtype": "TCP", 00:17:52.802 "adrfam": "IPv4", 00:17:52.802 "traddr": "10.0.0.1", 00:17:52.802 "trsvcid": "41960" 00:17:52.802 }, 00:17:52.802 "auth": { 00:17:52.802 "state": "completed", 00:17:52.802 "digest": "sha384", 00:17:52.802 "dhgroup": "ffdhe6144" 00:17:52.802 } 00:17:52.802 } 00:17:52.802 ]' 00:17:52.802 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.802 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.802 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.802 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:52.802 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.802 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.802 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.802 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.059 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 00:17:53.989 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.989 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:53.989 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.989 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.989 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.989 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.989 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.989 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.989 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:54.247 19:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:54.247 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.247 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:54.247 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:54.247 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.247 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.247 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.247 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.247 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.247 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.247 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.247 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:55.179 00:17:55.179 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.179 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.179 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.436 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.436 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.436 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.436 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.436 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.436 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.436 { 00:17:55.436 "cntlid": 89, 00:17:55.436 "qid": 0, 00:17:55.436 "state": "enabled", 00:17:55.436 "thread": "nvmf_tgt_poll_group_000", 00:17:55.436 "listen_address": { 00:17:55.436 "trtype": "TCP", 00:17:55.436 "adrfam": "IPv4", 00:17:55.436 "traddr": "10.0.0.2", 00:17:55.436 "trsvcid": "4420" 00:17:55.436 }, 00:17:55.436 "peer_address": { 00:17:55.436 "trtype": "TCP", 00:17:55.436 "adrfam": "IPv4", 00:17:55.436 "traddr": "10.0.0.1", 00:17:55.436 "trsvcid": "60112" 00:17:55.436 }, 00:17:55.436 "auth": { 00:17:55.436 "state": "completed", 00:17:55.436 "digest": "sha384", 00:17:55.436 "dhgroup": "ffdhe8192" 00:17:55.436 } 00:17:55.436 } 00:17:55.436 ]' 00:17:55.436 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.436 
19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:55.436 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.436 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.436 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.694 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.694 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.694 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.952 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:17:56.884 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.884 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:56.884 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:56.884 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.884 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.884 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.884 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:56.884 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:57.141 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:57.141 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.141 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:57.141 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:57.141 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:57.141 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.141 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.141 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.141 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:57.141 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.141 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.141 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.074 00:17:58.074 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.074 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.074 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.332 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.332 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.332 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.332 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.332 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.332 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:58.332 { 00:17:58.332 "cntlid": 91, 00:17:58.332 "qid": 0, 00:17:58.332 "state": "enabled", 00:17:58.332 "thread": "nvmf_tgt_poll_group_000", 00:17:58.332 "listen_address": { 00:17:58.332 "trtype": "TCP", 00:17:58.332 "adrfam": "IPv4", 00:17:58.332 "traddr": "10.0.0.2", 00:17:58.332 "trsvcid": "4420" 00:17:58.332 }, 00:17:58.332 "peer_address": { 00:17:58.332 "trtype": "TCP", 00:17:58.332 "adrfam": "IPv4", 00:17:58.332 "traddr": "10.0.0.1", 00:17:58.332 "trsvcid": "60142" 00:17:58.332 }, 00:17:58.332 "auth": { 00:17:58.332 "state": "completed", 00:17:58.332 "digest": "sha384", 00:17:58.332 "dhgroup": "ffdhe8192" 00:17:58.332 } 00:17:58.332 } 00:17:58.332 ]' 00:17:58.332 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.332 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.332 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.332 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:58.332 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.332 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.332 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.332 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.590 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==: 00:17:59.523 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.523 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:59.523 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.523 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.523 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.523 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.523 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:59.523 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:59.781 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:59.781 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.781 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:59.781 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:59.781 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:59.781 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.781 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.781 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.781 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.781 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.781 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.781 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.715 00:18:00.715 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.715 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.715 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.972 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.972 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.972 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.972 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.972 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.972 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.972 { 00:18:00.972 "cntlid": 93, 00:18:00.972 "qid": 0, 00:18:00.972 "state": "enabled", 00:18:00.972 "thread": "nvmf_tgt_poll_group_000", 00:18:00.972 "listen_address": { 00:18:00.972 "trtype": "TCP", 00:18:00.972 "adrfam": "IPv4", 00:18:00.972 "traddr": "10.0.0.2", 00:18:00.972 "trsvcid": "4420" 00:18:00.972 }, 00:18:00.972 "peer_address": { 00:18:00.972 "trtype": "TCP", 00:18:00.972 "adrfam": "IPv4", 00:18:00.972 "traddr": "10.0.0.1", 00:18:00.972 "trsvcid": "60166" 00:18:00.972 }, 00:18:00.972 "auth": { 00:18:00.972 "state": "completed", 00:18:00.972 "digest": "sha384", 00:18:00.972 "dhgroup": "ffdhe8192" 00:18:00.972 } 00:18:00.973 } 00:18:00.973 ]' 00:18:00.973 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.973 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.973 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.973 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:18:00.973 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.229 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.229 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.229 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.485 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX: 00:18:02.418 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.418 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:02.418 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.418 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.418 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.418 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.418 19:08:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:02.418 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:02.711 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:02.711 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.711 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:02.711 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:02.711 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:02.711 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.711 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:02.711 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.711 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.711 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.711 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:18:02.711 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.645 00:18:03.645 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.645 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.645 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.903 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.903 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.903 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.903 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.903 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.903 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.903 { 00:18:03.903 "cntlid": 95, 00:18:03.903 "qid": 0, 00:18:03.903 "state": "enabled", 00:18:03.903 "thread": "nvmf_tgt_poll_group_000", 00:18:03.903 "listen_address": { 00:18:03.903 "trtype": "TCP", 00:18:03.903 "adrfam": "IPv4", 00:18:03.903 "traddr": "10.0.0.2", 00:18:03.903 "trsvcid": "4420" 00:18:03.903 }, 00:18:03.903 "peer_address": { 00:18:03.903 "trtype": "TCP", 00:18:03.903 "adrfam": "IPv4", 00:18:03.903 "traddr": "10.0.0.1", 
00:18:03.903 "trsvcid": "60176" 00:18:03.903 }, 00:18:03.903 "auth": { 00:18:03.903 "state": "completed", 00:18:03.903 "digest": "sha384", 00:18:03.903 "dhgroup": "ffdhe8192" 00:18:03.903 } 00:18:03.903 } 00:18:03.903 ]' 00:18:03.903 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.903 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.903 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.903 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.903 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.903 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.903 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.903 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.160 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 00:18:05.094 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.094 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:05.094 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.094 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.094 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.094 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:05.094 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.094 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.094 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:05.094 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:05.352 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:05.352 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.352 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.352 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:05.352 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:05.352 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.352 19:08:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.352 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.352 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.352 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.352 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.352 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.610 00:18:05.868 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.868 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.868 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.125 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.125 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:06.125 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.125 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.125 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.125 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.125 { 00:18:06.125 "cntlid": 97, 00:18:06.125 "qid": 0, 00:18:06.125 "state": "enabled", 00:18:06.125 "thread": "nvmf_tgt_poll_group_000", 00:18:06.125 "listen_address": { 00:18:06.125 "trtype": "TCP", 00:18:06.125 "adrfam": "IPv4", 00:18:06.125 "traddr": "10.0.0.2", 00:18:06.125 "trsvcid": "4420" 00:18:06.125 }, 00:18:06.125 "peer_address": { 00:18:06.125 "trtype": "TCP", 00:18:06.125 "adrfam": "IPv4", 00:18:06.125 "traddr": "10.0.0.1", 00:18:06.125 "trsvcid": "40236" 00:18:06.125 }, 00:18:06.125 "auth": { 00:18:06.125 "state": "completed", 00:18:06.125 "digest": "sha512", 00:18:06.125 "dhgroup": "null" 00:18:06.125 } 00:18:06.125 } 00:18:06.125 ]' 00:18:06.125 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.125 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.125 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.125 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:06.125 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.125 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.126 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:18:06.126 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.383 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:18:07.317 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.317 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:07.317 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.317 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.317 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.317 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.317 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:07.317 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:18:07.575 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:07.575 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.575 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.575 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:07.575 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:07.575 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.575 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.575 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.575 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.576 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.576 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.576 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.833 00:18:07.834 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.834 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.834 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.092 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.092 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.092 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.092 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.092 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.092 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.092 { 00:18:08.092 "cntlid": 99, 00:18:08.092 "qid": 0, 00:18:08.092 "state": "enabled", 00:18:08.092 "thread": "nvmf_tgt_poll_group_000", 00:18:08.092 "listen_address": { 00:18:08.092 "trtype": "TCP", 00:18:08.092 "adrfam": "IPv4", 00:18:08.092 "traddr": "10.0.0.2", 00:18:08.092 "trsvcid": "4420" 00:18:08.092 }, 00:18:08.092 "peer_address": { 00:18:08.092 "trtype": "TCP", 00:18:08.092 "adrfam": "IPv4", 00:18:08.092 "traddr": "10.0.0.1", 00:18:08.092 "trsvcid": "40262" 00:18:08.092 }, 00:18:08.092 "auth": { 00:18:08.092 "state": "completed", 00:18:08.092 "digest": "sha512", 00:18:08.092 "dhgroup": "null" 00:18:08.092 } 00:18:08.092 } 00:18:08.092 ]' 00:18:08.092 
19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.350 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.350 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.350 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:08.350 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.350 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.350 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.350 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.608 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==: 00:18:09.540 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.540 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:09.540 19:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.540 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.540 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.540 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.540 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:09.540 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:09.798 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:09.798 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.798 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.798 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:09.798 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:09.798 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.798 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.798 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.798 19:09:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.798 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.798 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.798 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.056 00:18:10.056 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.056 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.056 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.314 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.314 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.314 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.314 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.314 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:10.314 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.314 { 00:18:10.314 "cntlid": 101, 00:18:10.314 "qid": 0, 00:18:10.314 "state": "enabled", 00:18:10.314 "thread": "nvmf_tgt_poll_group_000", 00:18:10.314 "listen_address": { 00:18:10.314 "trtype": "TCP", 00:18:10.314 "adrfam": "IPv4", 00:18:10.314 "traddr": "10.0.0.2", 00:18:10.314 "trsvcid": "4420" 00:18:10.314 }, 00:18:10.314 "peer_address": { 00:18:10.314 "trtype": "TCP", 00:18:10.314 "adrfam": "IPv4", 00:18:10.314 "traddr": "10.0.0.1", 00:18:10.314 "trsvcid": "40284" 00:18:10.314 }, 00:18:10.314 "auth": { 00:18:10.314 "state": "completed", 00:18:10.314 "digest": "sha512", 00:18:10.314 "dhgroup": "null" 00:18:10.314 } 00:18:10.314 } 00:18:10.314 ]' 00:18:10.314 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.314 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.314 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.571 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:10.571 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.571 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.571 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.571 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.828 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX: 00:18:11.758 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.758 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:11.758 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.758 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.758 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.758 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.758 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:11.758 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:12.016 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:12.016 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.016 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.016 19:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:12.016 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:12.016 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.016 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:12.016 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.016 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.016 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.016 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.016 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.273 00:18:12.273 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.273 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.273 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:12.531 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.531 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.531 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.531 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.531 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.531 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.531 { 00:18:12.531 "cntlid": 103, 00:18:12.531 "qid": 0, 00:18:12.531 "state": "enabled", 00:18:12.531 "thread": "nvmf_tgt_poll_group_000", 00:18:12.531 "listen_address": { 00:18:12.531 "trtype": "TCP", 00:18:12.531 "adrfam": "IPv4", 00:18:12.531 "traddr": "10.0.0.2", 00:18:12.531 "trsvcid": "4420" 00:18:12.531 }, 00:18:12.531 "peer_address": { 00:18:12.531 "trtype": "TCP", 00:18:12.531 "adrfam": "IPv4", 00:18:12.531 "traddr": "10.0.0.1", 00:18:12.531 "trsvcid": "40326" 00:18:12.531 }, 00:18:12.531 "auth": { 00:18:12.531 "state": "completed", 00:18:12.531 "digest": "sha512", 00:18:12.531 "dhgroup": "null" 00:18:12.531 } 00:18:12.531 } 00:18:12.531 ]' 00:18:12.531 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.531 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.531 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.789 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:12.789 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:18:12.789 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.789 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.789 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.046 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 00:18:13.980 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.980 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:13.980 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.980 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.980 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.980 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.980 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.980 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:13.980 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:14.238 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:14.238 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.238 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:14.238 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:14.238 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:14.238 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.238 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.238 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.238 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.238 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.238 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.238 19:09:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.496 00:18:14.496 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.496 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.496 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.754 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.754 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.754 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.754 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.754 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.754 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.754 { 00:18:14.754 "cntlid": 105, 00:18:14.754 "qid": 0, 00:18:14.754 "state": "enabled", 00:18:14.754 "thread": "nvmf_tgt_poll_group_000", 00:18:14.754 "listen_address": { 00:18:14.754 "trtype": "TCP", 00:18:14.754 "adrfam": "IPv4", 00:18:14.754 "traddr": "10.0.0.2", 00:18:14.754 "trsvcid": "4420" 00:18:14.754 }, 00:18:14.754 "peer_address": { 00:18:14.754 "trtype": "TCP", 00:18:14.754 "adrfam": "IPv4", 00:18:14.754 "traddr": "10.0.0.1", 
00:18:14.754 "trsvcid": "35228" 00:18:14.754 }, 00:18:14.754 "auth": { 00:18:14.754 "state": "completed", 00:18:14.754 "digest": "sha512", 00:18:14.754 "dhgroup": "ffdhe2048" 00:18:14.754 } 00:18:14.754 } 00:18:14.754 ]' 00:18:14.754 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.754 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.754 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.754 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:14.754 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.012 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.013 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.013 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.271 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:18:16.203 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:16.203 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:16.203 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.203 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.203 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.203 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.203 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:16.203 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:16.461 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:16.461 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.461 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:16.461 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:16.461 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:16.461 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.461 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.461 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.461 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.461 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.461 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.461 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.718 00:18:16.718 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.718 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.718 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.974 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.974 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.974 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:16.974 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.974 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.974 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.974 { 00:18:16.974 "cntlid": 107, 00:18:16.974 "qid": 0, 00:18:16.974 "state": "enabled", 00:18:16.975 "thread": "nvmf_tgt_poll_group_000", 00:18:16.975 "listen_address": { 00:18:16.975 "trtype": "TCP", 00:18:16.975 "adrfam": "IPv4", 00:18:16.975 "traddr": "10.0.0.2", 00:18:16.975 "trsvcid": "4420" 00:18:16.975 }, 00:18:16.975 "peer_address": { 00:18:16.975 "trtype": "TCP", 00:18:16.975 "adrfam": "IPv4", 00:18:16.975 "traddr": "10.0.0.1", 00:18:16.975 "trsvcid": "35236" 00:18:16.975 }, 00:18:16.975 "auth": { 00:18:16.975 "state": "completed", 00:18:16.975 "digest": "sha512", 00:18:16.975 "dhgroup": "ffdhe2048" 00:18:16.975 } 00:18:16.975 } 00:18:16.975 ]' 00:18:16.975 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.975 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.975 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.975 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:16.975 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.231 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.232 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.232 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.490 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==: 00:18:18.421 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.421 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:18.421 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.421 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.421 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.421 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.421 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:18.421 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:18.679 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 2 00:18:18.679 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.679 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:18.679 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:18.679 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:18.679 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.679 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.679 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.679 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.679 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.679 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.679 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.936 00:18:18.936 19:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.936 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.936 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.194 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.194 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.194 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.194 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.194 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.194 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.194 { 00:18:19.194 "cntlid": 109, 00:18:19.194 "qid": 0, 00:18:19.194 "state": "enabled", 00:18:19.194 "thread": "nvmf_tgt_poll_group_000", 00:18:19.194 "listen_address": { 00:18:19.194 "trtype": "TCP", 00:18:19.194 "adrfam": "IPv4", 00:18:19.194 "traddr": "10.0.0.2", 00:18:19.194 "trsvcid": "4420" 00:18:19.194 }, 00:18:19.194 "peer_address": { 00:18:19.194 "trtype": "TCP", 00:18:19.194 "adrfam": "IPv4", 00:18:19.194 "traddr": "10.0.0.1", 00:18:19.194 "trsvcid": "35268" 00:18:19.194 }, 00:18:19.194 "auth": { 00:18:19.194 "state": "completed", 00:18:19.194 "digest": "sha512", 00:18:19.194 "dhgroup": "ffdhe2048" 00:18:19.194 } 00:18:19.194 } 00:18:19.194 ]' 00:18:19.194 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.194 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.194 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.194 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:19.194 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.194 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.194 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.194 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.452 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX: 00:18:20.385 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.385 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:20.385 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.385 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:20.385 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.385 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.385 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:20.385 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:20.643 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:20.643 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.643 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:20.643 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:20.643 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:20.643 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.643 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:20.643 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.643 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.643 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:20.643 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.643 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.212 00:18:21.212 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.212 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.212 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.212 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.212 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.212 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.212 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.212 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.212 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.212 { 00:18:21.212 "cntlid": 111, 00:18:21.212 "qid": 0, 00:18:21.212 "state": "enabled", 00:18:21.212 "thread": "nvmf_tgt_poll_group_000", 
00:18:21.212 "listen_address": { 00:18:21.212 "trtype": "TCP", 00:18:21.212 "adrfam": "IPv4", 00:18:21.212 "traddr": "10.0.0.2", 00:18:21.212 "trsvcid": "4420" 00:18:21.212 }, 00:18:21.212 "peer_address": { 00:18:21.212 "trtype": "TCP", 00:18:21.212 "adrfam": "IPv4", 00:18:21.212 "traddr": "10.0.0.1", 00:18:21.212 "trsvcid": "35302" 00:18:21.212 }, 00:18:21.212 "auth": { 00:18:21.212 "state": "completed", 00:18:21.212 "digest": "sha512", 00:18:21.212 "dhgroup": "ffdhe2048" 00:18:21.212 } 00:18:21.212 } 00:18:21.212 ]' 00:18:21.212 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.505 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.506 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.506 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:21.506 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.506 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.506 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.506 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.776 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 
00:18:22.710 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.710 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:22.710 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.710 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.710 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.710 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.710 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.710 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:22.710 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:22.968 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:22.968 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.968 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:22.968 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:22.968 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:18:22.968 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.968 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.968 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.968 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.968 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.968 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.968 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.226 00:18:23.226 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.226 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.226 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.484 19:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.484 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.484 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.484 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.484 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.484 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.484 { 00:18:23.484 "cntlid": 113, 00:18:23.484 "qid": 0, 00:18:23.484 "state": "enabled", 00:18:23.484 "thread": "nvmf_tgt_poll_group_000", 00:18:23.484 "listen_address": { 00:18:23.484 "trtype": "TCP", 00:18:23.484 "adrfam": "IPv4", 00:18:23.484 "traddr": "10.0.0.2", 00:18:23.484 "trsvcid": "4420" 00:18:23.484 }, 00:18:23.484 "peer_address": { 00:18:23.484 "trtype": "TCP", 00:18:23.485 "adrfam": "IPv4", 00:18:23.485 "traddr": "10.0.0.1", 00:18:23.485 "trsvcid": "35332" 00:18:23.485 }, 00:18:23.485 "auth": { 00:18:23.485 "state": "completed", 00:18:23.485 "digest": "sha512", 00:18:23.485 "dhgroup": "ffdhe3072" 00:18:23.485 } 00:18:23.485 } 00:18:23.485 ]' 00:18:23.485 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.485 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.485 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.485 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:23.485 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.742 19:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.743 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.743 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.000 19:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:18:24.933 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.933 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:24.933 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.933 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.933 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.933 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.933 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:18:24.933 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:25.191 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:25.191 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.191 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.191 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:25.192 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:25.192 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.192 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.192 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.192 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.192 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.192 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.192 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:25.449
00:18:25.449 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:25.449 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:25.449 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:25.707 19:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:25.707 19:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:25.707 19:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:25.707 19:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:25.707 19:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:25.707 19:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:25.707 {
00:18:25.707 "cntlid": 115,
00:18:25.707 "qid": 0,
00:18:25.707 "state": "enabled",
00:18:25.707 "thread": "nvmf_tgt_poll_group_000",
00:18:25.707 "listen_address": {
00:18:25.707 "trtype": "TCP",
00:18:25.707 "adrfam": "IPv4",
00:18:25.707 "traddr": "10.0.0.2",
00:18:25.707 "trsvcid": "4420"
00:18:25.707 },
00:18:25.707 "peer_address": {
00:18:25.707 "trtype": "TCP",
00:18:25.707 "adrfam": "IPv4",
00:18:25.707 "traddr": "10.0.0.1",
00:18:25.707 "trsvcid": "46852" },
00:18:25.707 "auth": {
00:18:25.707 "state": "completed",
00:18:25.707 "digest": "sha512",
00:18:25.707 "dhgroup": "ffdhe3072"
00:18:25.707 }
00:18:25.707 }
00:18:25.707 ]'
00:18:25.707 19:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:25.965 19:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:25.965 19:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:25.965 19:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:25.965 19:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:25.965 19:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:25.965 19:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:25.965 19:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:26.223 19:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==:
00:18:27.157 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:27.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:27.157 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:27.157 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:27.157 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:27.157 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:27.157 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:27.157 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:27.157 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:27.415 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2
00:18:27.415 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:27.415 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:27.415 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:27.415 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:27.415 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:27.415 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:27.415 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:27.415 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:27.415 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:27.415 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:27.415 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:27.981
00:18:27.981 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:27.981 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:27.981 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:28.239 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:28.239 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:28.239 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:28.239 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:28.239 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:28.239 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:28.239 {
00:18:28.239 "cntlid": 117,
00:18:28.239 "qid": 0,
00:18:28.239 "state": "enabled",
00:18:28.239 "thread": "nvmf_tgt_poll_group_000",
00:18:28.239 "listen_address": {
00:18:28.239 "trtype": "TCP",
00:18:28.239 "adrfam": "IPv4",
00:18:28.239 "traddr": "10.0.0.2",
00:18:28.239 "trsvcid": "4420"
00:18:28.239 },
00:18:28.239 "peer_address": {
00:18:28.239 "trtype": "TCP",
00:18:28.239 "adrfam": "IPv4",
00:18:28.239 "traddr": "10.0.0.1",
00:18:28.239 "trsvcid": "46876" },
00:18:28.239 "auth": {
00:18:28.239 "state": "completed",
00:18:28.239 "digest": "sha512",
00:18:28.239 "dhgroup": "ffdhe3072"
00:18:28.239 }
00:18:28.239 }
00:18:28.239 ]'
00:18:28.239 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:28.239 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:28.239 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:28.239 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:28.239 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:28.239 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:28.239 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:28.239 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:28.497 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX:
00:18:29.871 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:29.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:29.871 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:29.871 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:29.871 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:29.871 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:29.871 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:29.871 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:29.871 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:29.871 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3
00:18:29.871 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:29.871 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:29.871 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:29.871 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:29.871 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:29.871 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:18:29.871 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:29.871 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:29.871 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:29.871 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:29.871 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:30.437
00:18:30.437 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:30.437 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:30.437 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:30.437 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:30.437 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:30.437 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:30.437 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:30.437 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:30.437 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:30.437 {
00:18:30.437 "cntlid": 119,
00:18:30.437 "qid": 0,
00:18:30.437 "state": "enabled",
00:18:30.437 "thread": "nvmf_tgt_poll_group_000",
00:18:30.437 "listen_address": {
00:18:30.437 "trtype": "TCP",
00:18:30.437 "adrfam": "IPv4",
00:18:30.437 "traddr": "10.0.0.2",
00:18:30.437 "trsvcid": "4420"
00:18:30.437 },
00:18:30.437 "peer_address": {
00:18:30.437 "trtype": "TCP",
00:18:30.437 "adrfam": "IPv4",
00:18:30.437 "traddr": "10.0.0.1",
00:18:30.437 "trsvcid": "46902"
00:18:30.437 },
00:18:30.437 "auth": {
00:18:30.437 "state": "completed",
00:18:30.437 "digest": "sha512",
00:18:30.437 "dhgroup": "ffdhe3072"
00:18:30.437 }
00:18:30.437 }
00:18:30.437 ]'
00:18:30.437 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:30.695 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:30.695 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:30.695 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:30.695 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:30.695 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:30.695 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:30.695 19:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:30.955 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=:
00:18:31.886 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:31.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:31.886 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:31.886 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:31.886 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:31.886 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:31.886 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:31.886 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:31.886 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:31.886 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:32.143 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0
00:18:32.143 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:32.143 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:32.143 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:32.143 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:32.143 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:32.143 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:32.143 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:32.143 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.143 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:32.143 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:32.143 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:32.399
00:18:32.399 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:32.399 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:32.399 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:32.656 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:32.656 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:32.656 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:32.656 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.656 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:32.656 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:32.656 {
00:18:32.656 "cntlid": 121,
00:18:32.656 "qid": 0,
00:18:32.656 "state": "enabled",
00:18:32.656 "thread": "nvmf_tgt_poll_group_000",
00:18:32.656 "listen_address": {
00:18:32.656 "trtype": "TCP",
00:18:32.656 "adrfam": "IPv4",
00:18:32.656 "traddr": "10.0.0.2",
00:18:32.656 "trsvcid": "4420"
00:18:32.656 },
00:18:32.656 "peer_address": {
00:18:32.656 "trtype": "TCP",
00:18:32.656 "adrfam": "IPv4",
00:18:32.656 "traddr": "10.0.0.1",
00:18:32.656 "trsvcid": "46932"
00:18:32.656 },
00:18:32.656 "auth": {
00:18:32.656 "state": "completed",
00:18:32.656 "digest": "sha512",
00:18:32.656 "dhgroup": "ffdhe4096"
00:18:32.656 }
00:18:32.656 }
00:18:32.656 ]'
00:18:32.656 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:32.656 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:32.656 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:32.914 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:32.914 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:32.914 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:32.914 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:32.914 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:33.171 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=:
00:18:34.104 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:34.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:34.104 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:34.104 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:34.104 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.104 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:34.104 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:34.104 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:34.104 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:34.362 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1
00:18:34.362 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:34.362 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:34.362 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:34.362 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:34.362 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:34.362 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:34.362 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:34.362 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.362 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:34.362 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:34.362 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:34.927
00:18:34.927 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:34.927 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:34.927 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:35.184 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:35.184 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:35.184 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:35.184 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:35.184 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:35.184 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:35.184 {
00:18:35.184 "cntlid": 123,
00:18:35.184 "qid": 0,
00:18:35.184 "state": "enabled",
00:18:35.184 "thread": "nvmf_tgt_poll_group_000",
00:18:35.184 "listen_address": {
00:18:35.184 "trtype": "TCP",
00:18:35.184 "adrfam": "IPv4",
00:18:35.184 "traddr": "10.0.0.2",
00:18:35.184 "trsvcid": "4420"
00:18:35.184 },
00:18:35.184 "peer_address": {
00:18:35.184 "trtype": "TCP",
00:18:35.184 "adrfam": "IPv4",
00:18:35.184 "traddr": "10.0.0.1",
00:18:35.184 "trsvcid": "59102"
00:18:35.184 },
00:18:35.184 "auth": {
00:18:35.184 "state": "completed",
00:18:35.184 "digest": "sha512",
00:18:35.184 "dhgroup": "ffdhe4096"
00:18:35.184 }
00:18:35.184 }
00:18:35.184 ]'
00:18:35.184 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:35.184 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:35.184 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:35.184 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:35.184 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:35.184 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:35.184 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:35.184 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:35.441 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==:
00:18:36.373 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:36.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:36.373 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:36.373 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:36.373 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.373 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:36.373 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:36.373 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:36.373 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:36.938 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2
00:18:36.938 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:36.938 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:36.938 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:36.938 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:36.938 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:36.938 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:36.938 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:36.938 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.938 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:36.938 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:36.938 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:37.196
00:18:37.196 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:37.196 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:37.196 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:37.453 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:37.453 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:37.453 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:37.453 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:37.453 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:37.453 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:37.454 {
00:18:37.454 "cntlid": 125,
00:18:37.454 "qid": 0,
00:18:37.454 "state": "enabled",
00:18:37.454 "thread": "nvmf_tgt_poll_group_000",
00:18:37.454 "listen_address": {
00:18:37.454 "trtype": "TCP",
00:18:37.454 "adrfam": "IPv4",
00:18:37.454 "traddr": "10.0.0.2",
00:18:37.454 "trsvcid": "4420"
00:18:37.454 },
00:18:37.454 "peer_address": {
00:18:37.454 "trtype": "TCP",
00:18:37.454 "adrfam": "IPv4",
00:18:37.454 "traddr": "10.0.0.1",
00:18:37.454 "trsvcid": "59126" },
00:18:37.454 "auth": {
00:18:37.454 "state": "completed",
00:18:37.454 "digest": "sha512",
00:18:37.454 "dhgroup": "ffdhe4096"
00:18:37.454 }
00:18:37.454 }
00:18:37.454 ]'
00:18:37.454 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:37.454 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:37.454 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:37.454 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:37.454 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:37.454 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:37.454 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:37.454 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:37.711 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX:
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:39.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:39.084 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:39.342
00:18:39.599 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:39.599 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:39.599 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:39.599 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:39.599 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:39.599 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:39.599 19:09:32
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.599 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.599 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.599 { 00:18:39.599 "cntlid": 127, 00:18:39.599 "qid": 0, 00:18:39.599 "state": "enabled", 00:18:39.599 "thread": "nvmf_tgt_poll_group_000", 00:18:39.599 "listen_address": { 00:18:39.599 "trtype": "TCP", 00:18:39.599 "adrfam": "IPv4", 00:18:39.599 "traddr": "10.0.0.2", 00:18:39.599 "trsvcid": "4420" 00:18:39.599 }, 00:18:39.599 "peer_address": { 00:18:39.599 "trtype": "TCP", 00:18:39.599 "adrfam": "IPv4", 00:18:39.599 "traddr": "10.0.0.1", 00:18:39.599 "trsvcid": "59164" 00:18:39.599 }, 00:18:39.599 "auth": { 00:18:39.599 "state": "completed", 00:18:39.599 "digest": "sha512", 00:18:39.599 "dhgroup": "ffdhe4096" 00:18:39.599 } 00:18:39.599 } 00:18:39.599 ]' 00:18:39.857 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.857 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.857 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.857 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:39.857 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.857 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.857 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.857 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.115 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 00:18:41.083 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.083 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:41.083 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.083 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.083 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.083 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.083 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.083 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:41.083 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:41.339 19:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:41.339 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.339 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:41.339 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:41.339 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:41.339 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.339 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.339 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.339 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.339 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.339 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.339 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:41.905 00:18:41.905 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.905 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.905 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.163 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.163 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.163 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.163 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.163 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.163 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.163 { 00:18:42.163 "cntlid": 129, 00:18:42.163 "qid": 0, 00:18:42.163 "state": "enabled", 00:18:42.163 "thread": "nvmf_tgt_poll_group_000", 00:18:42.163 "listen_address": { 00:18:42.163 "trtype": "TCP", 00:18:42.163 "adrfam": "IPv4", 00:18:42.163 "traddr": "10.0.0.2", 00:18:42.163 "trsvcid": "4420" 00:18:42.163 }, 00:18:42.163 "peer_address": { 00:18:42.163 "trtype": "TCP", 00:18:42.163 "adrfam": "IPv4", 00:18:42.163 "traddr": "10.0.0.1", 00:18:42.164 "trsvcid": "59182" 00:18:42.164 }, 00:18:42.164 "auth": { 00:18:42.164 "state": "completed", 00:18:42.164 "digest": "sha512", 00:18:42.164 "dhgroup": "ffdhe6144" 00:18:42.164 } 00:18:42.164 } 00:18:42.164 ]' 00:18:42.164 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.164 
19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.164 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.164 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:42.164 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.422 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.422 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.422 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.679 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:18:43.612 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.613 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:43.613 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:43.613 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.613 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.613 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.613 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:43.613 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:43.870 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:43.870 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.870 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:43.870 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:43.870 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:43.870 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.871 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.871 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.871 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:43.871 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.871 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.871 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.436 00:18:44.436 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.436 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.436 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.694 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.694 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.694 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.695 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.695 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.695 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:18:44.695 { 00:18:44.695 "cntlid": 131, 00:18:44.695 "qid": 0, 00:18:44.695 "state": "enabled", 00:18:44.695 "thread": "nvmf_tgt_poll_group_000", 00:18:44.695 "listen_address": { 00:18:44.695 "trtype": "TCP", 00:18:44.695 "adrfam": "IPv4", 00:18:44.695 "traddr": "10.0.0.2", 00:18:44.695 "trsvcid": "4420" 00:18:44.695 }, 00:18:44.695 "peer_address": { 00:18:44.695 "trtype": "TCP", 00:18:44.695 "adrfam": "IPv4", 00:18:44.695 "traddr": "10.0.0.1", 00:18:44.695 "trsvcid": "47246" 00:18:44.695 }, 00:18:44.695 "auth": { 00:18:44.695 "state": "completed", 00:18:44.695 "digest": "sha512", 00:18:44.695 "dhgroup": "ffdhe6144" 00:18:44.695 } 00:18:44.695 } 00:18:44.695 ]' 00:18:44.695 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.695 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.695 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.695 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:44.695 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.695 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.695 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.695 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.953 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==: 00:18:45.887 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.887 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:45.887 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.887 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.887 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.887 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.887 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:45.887 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:46.453 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:46.453 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.453 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:46.453 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:46.453 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.453 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.453 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.453 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.453 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.453 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.453 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.453 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.711 00:18:46.969 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.969 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.969 19:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.969 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.969 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.969 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.969 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.227 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.228 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.228 { 00:18:47.228 "cntlid": 133, 00:18:47.228 "qid": 0, 00:18:47.228 "state": "enabled", 00:18:47.228 "thread": "nvmf_tgt_poll_group_000", 00:18:47.228 "listen_address": { 00:18:47.228 "trtype": "TCP", 00:18:47.228 "adrfam": "IPv4", 00:18:47.228 "traddr": "10.0.0.2", 00:18:47.228 "trsvcid": "4420" 00:18:47.228 }, 00:18:47.228 "peer_address": { 00:18:47.228 "trtype": "TCP", 00:18:47.228 "adrfam": "IPv4", 00:18:47.228 "traddr": "10.0.0.1", 00:18:47.228 "trsvcid": "47274" 00:18:47.228 }, 00:18:47.228 "auth": { 00:18:47.228 "state": "completed", 00:18:47.228 "digest": "sha512", 00:18:47.228 "dhgroup": "ffdhe6144" 00:18:47.228 } 00:18:47.228 } 00:18:47.228 ]' 00:18:47.228 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.228 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.228 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.228 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:47.228 19:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.228 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.228 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.228 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.486 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX: 00:18:48.420 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.420 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:48.420 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.420 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.420 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.420 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.420 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:48.420 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:48.677 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:48.677 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.677 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.677 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:48.677 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:48.677 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.677 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:48.677 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.677 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.677 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.677 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.677 19:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.240 00:18:49.240 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.240 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.240 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.498 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.498 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.498 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.498 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.498 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.498 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.498 { 00:18:49.498 "cntlid": 135, 00:18:49.498 "qid": 0, 00:18:49.498 "state": "enabled", 00:18:49.498 "thread": "nvmf_tgt_poll_group_000", 00:18:49.498 "listen_address": { 00:18:49.498 "trtype": "TCP", 00:18:49.498 "adrfam": "IPv4", 00:18:49.498 "traddr": "10.0.0.2", 00:18:49.498 "trsvcid": "4420" 00:18:49.498 }, 00:18:49.498 "peer_address": { 00:18:49.498 "trtype": "TCP", 00:18:49.498 "adrfam": "IPv4", 00:18:49.498 "traddr": "10.0.0.1", 00:18:49.498 "trsvcid": 
"47294" 00:18:49.498 }, 00:18:49.498 "auth": { 00:18:49.498 "state": "completed", 00:18:49.498 "digest": "sha512", 00:18:49.498 "dhgroup": "ffdhe6144" 00:18:49.498 } 00:18:49.498 } 00:18:49.498 ]' 00:18:49.498 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.756 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.756 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.756 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:49.756 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.756 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.756 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.756 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.014 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 00:18:50.947 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.947 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:50.947 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.947 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.947 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.947 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.947 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.947 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:50.947 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:51.205 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:51.205 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.205 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:51.205 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:51.205 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:51.205 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.205 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.205 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.205 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.205 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.205 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.205 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.137 00:18:52.137 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.137 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.137 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.395 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.395 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.395 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.395 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.395 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.395 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.395 { 00:18:52.395 "cntlid": 137, 00:18:52.395 "qid": 0, 00:18:52.395 "state": "enabled", 00:18:52.395 "thread": "nvmf_tgt_poll_group_000", 00:18:52.395 "listen_address": { 00:18:52.395 "trtype": "TCP", 00:18:52.395 "adrfam": "IPv4", 00:18:52.395 "traddr": "10.0.0.2", 00:18:52.395 "trsvcid": "4420" 00:18:52.395 }, 00:18:52.395 "peer_address": { 00:18:52.395 "trtype": "TCP", 00:18:52.395 "adrfam": "IPv4", 00:18:52.395 "traddr": "10.0.0.1", 00:18:52.395 "trsvcid": "47312" 00:18:52.395 }, 00:18:52.395 "auth": { 00:18:52.395 "state": "completed", 00:18:52.395 "digest": "sha512", 00:18:52.395 "dhgroup": "ffdhe8192" 00:18:52.395 } 00:18:52.395 } 00:18:52.395 ]' 00:18:52.395 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.395 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.395 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.395 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:52.395 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.395 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.395 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.395 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.653 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:18:53.588 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.588 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:53.588 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.588 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.588 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.845 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.845 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:53.845 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:54.103 19:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:54.103 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.103 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:54.103 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:54.103 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:54.103 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.103 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.103 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.104 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.104 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.104 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.104 19:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:55.036 00:18:55.036 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.036 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.036 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.294 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.294 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.294 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.294 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.294 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.294 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.294 { 00:18:55.294 "cntlid": 139, 00:18:55.294 "qid": 0, 00:18:55.294 "state": "enabled", 00:18:55.294 "thread": "nvmf_tgt_poll_group_000", 00:18:55.294 "listen_address": { 00:18:55.294 "trtype": "TCP", 00:18:55.294 "adrfam": "IPv4", 00:18:55.294 "traddr": "10.0.0.2", 00:18:55.294 "trsvcid": "4420" 00:18:55.294 }, 00:18:55.294 "peer_address": { 00:18:55.294 "trtype": "TCP", 00:18:55.294 "adrfam": "IPv4", 00:18:55.294 "traddr": "10.0.0.1", 00:18:55.294 "trsvcid": "54340" 00:18:55.294 }, 00:18:55.294 "auth": { 00:18:55.294 "state": "completed", 00:18:55.294 "digest": "sha512", 00:18:55.294 "dhgroup": "ffdhe8192" 00:18:55.294 } 00:18:55.294 } 00:18:55.294 ]' 00:18:55.294 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.294 
19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.294 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.294 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.294 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.294 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.295 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.295 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.552 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGU2MzYyMGFhOTE4MmE5M2UyYzRlZTY0YjRhZjNmYmSEPhBf: --dhchap-ctrl-secret DHHC-1:02:YjlmOTE5ZmUyYmI5NTlkNDA3OTY3OTRmZGU0ZWFiNmU2MmZlMWVmNjNmMjQzNDE596lAIA==: 00:18:56.486 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.486 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:56.486 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.486 19:09:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.486 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.486 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.486 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:56.486 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:56.744 19:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:56.744 19:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.744 19:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.744 19:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:56.744 19:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.744 19:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.744 19:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.744 19:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.744 19:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.744 19:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.744 19:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.744 19:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.677 00:18:57.677 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.677 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.677 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.964 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.964 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.964 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.964 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.964 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.964 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.964 { 
00:18:57.964 "cntlid": 141, 00:18:57.964 "qid": 0, 00:18:57.964 "state": "enabled", 00:18:57.964 "thread": "nvmf_tgt_poll_group_000", 00:18:57.964 "listen_address": { 00:18:57.964 "trtype": "TCP", 00:18:57.964 "adrfam": "IPv4", 00:18:57.964 "traddr": "10.0.0.2", 00:18:57.964 "trsvcid": "4420" 00:18:57.964 }, 00:18:57.964 "peer_address": { 00:18:57.964 "trtype": "TCP", 00:18:57.964 "adrfam": "IPv4", 00:18:57.964 "traddr": "10.0.0.1", 00:18:57.964 "trsvcid": "54368" 00:18:57.964 }, 00:18:57.964 "auth": { 00:18:57.964 "state": "completed", 00:18:57.964 "digest": "sha512", 00:18:57.964 "dhgroup": "ffdhe8192" 00:18:57.964 } 00:18:57.964 } 00:18:57.964 ]' 00:18:57.964 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.964 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.964 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.964 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:57.964 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.283 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.283 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.283 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.283 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGZjN2JmNTEzNjVjMzM3ZWZhYTE3OGM5MjA2NzRjZTBlODk4NWYwZGFlMmNlMGZm3oByUA==: --dhchap-ctrl-secret DHHC-1:01:NGIzNDYzMjZiMDNjNTAwYzdkODgyYjc4MjE2YTQwNmLuH8HX: 00:18:59.218 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.218 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:59.218 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.218 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.218 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.218 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.218 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:59.218 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:59.784 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:59.784 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.784 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:59.784 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:18:59.784 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.784 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.784 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:59.784 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.784 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.784 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.784 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.784 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.716 00:19:00.716 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.716 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.716 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.716 19:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.716 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.716 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.716 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.716 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.716 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.716 { 00:19:00.716 "cntlid": 143, 00:19:00.716 "qid": 0, 00:19:00.716 "state": "enabled", 00:19:00.716 "thread": "nvmf_tgt_poll_group_000", 00:19:00.716 "listen_address": { 00:19:00.716 "trtype": "TCP", 00:19:00.716 "adrfam": "IPv4", 00:19:00.716 "traddr": "10.0.0.2", 00:19:00.716 "trsvcid": "4420" 00:19:00.716 }, 00:19:00.716 "peer_address": { 00:19:00.716 "trtype": "TCP", 00:19:00.716 "adrfam": "IPv4", 00:19:00.716 "traddr": "10.0.0.1", 00:19:00.716 "trsvcid": "54394" 00:19:00.716 }, 00:19:00.716 "auth": { 00:19:00.716 "state": "completed", 00:19:00.716 "digest": "sha512", 00:19:00.716 "dhgroup": "ffdhe8192" 00:19:00.716 } 00:19:00.716 } 00:19:00.716 ]' 00:19:00.974 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.974 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.974 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.974 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.974 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.974 19:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.974 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.974 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.232 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 00:19:02.163 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.163 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:02.163 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.163 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.163 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.163 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:02.163 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:02.163 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:02.163 19:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:02.163 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:02.163 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:02.420 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:02.420 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.420 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.420 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:02.420 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:02.420 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.420 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.420 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.420 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.420 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:02.420 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.420 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.352 00:19:03.352 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.352 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.352 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.610 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.610 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.610 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.610 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.610 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.610 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.610 { 00:19:03.610 "cntlid": 145, 00:19:03.610 "qid": 0, 00:19:03.610 "state": "enabled", 
00:19:03.610 "thread": "nvmf_tgt_poll_group_000", 00:19:03.610 "listen_address": { 00:19:03.610 "trtype": "TCP", 00:19:03.610 "adrfam": "IPv4", 00:19:03.610 "traddr": "10.0.0.2", 00:19:03.610 "trsvcid": "4420" 00:19:03.610 }, 00:19:03.610 "peer_address": { 00:19:03.610 "trtype": "TCP", 00:19:03.610 "adrfam": "IPv4", 00:19:03.610 "traddr": "10.0.0.1", 00:19:03.610 "trsvcid": "54414" 00:19:03.610 }, 00:19:03.610 "auth": { 00:19:03.610 "state": "completed", 00:19:03.610 "digest": "sha512", 00:19:03.610 "dhgroup": "ffdhe8192" 00:19:03.610 } 00:19:03.610 } 00:19:03.610 ]' 00:19:03.610 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.610 19:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.610 19:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.610 19:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.610 19:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.867 19:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.867 19:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.868 19:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.124 19:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:00:ZDM2ZjE3MGQzMThmNzA3ZGQxNzg4OTZlYjU0Y2FkZGM1NTViODdjMGVmYWI5MTE31jBCEA==: --dhchap-ctrl-secret DHHC-1:03:MDkyYTA5OTYyNGQzZGVlOTRhMDYxMDIxNmRhNjMwZjU3YzI2YjUwMWRlNGY0YTg1ZjA3NGUxZTIzOWFjY2Y0Yb4FMSc=: 00:19:05.057 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.057 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:05.057 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.057 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.057 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.057 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:19:05.057 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.057 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.057 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.057 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:05.057 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:05.058 
19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:05.058 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:05.058 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:05.058 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:05.058 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:05.058 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:05.058 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:05.989 request: 00:19:05.989 { 00:19:05.989 "name": "nvme0", 00:19:05.989 "trtype": "tcp", 00:19:05.989 "traddr": "10.0.0.2", 00:19:05.989 "adrfam": "ipv4", 00:19:05.989 "trsvcid": "4420", 00:19:05.989 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:05.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:05.989 "prchk_reftag": false, 00:19:05.989 "prchk_guard": false, 00:19:05.989 "hdgst": false, 00:19:05.989 "ddgst": false, 00:19:05.989 "dhchap_key": "key2", 
00:19:05.989 "method": "bdev_nvme_attach_controller", 00:19:05.989 "req_id": 1 00:19:05.989 } 00:19:05.989 Got JSON-RPC error response 00:19:05.989 response: 00:19:05.989 { 00:19:05.989 "code": -5, 00:19:05.989 "message": "Input/output error" 00:19:05.989 } 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:05.989 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:06.922 request: 00:19:06.922 { 00:19:06.922 "name": "nvme0", 00:19:06.922 
"trtype": "tcp", 00:19:06.922 "traddr": "10.0.0.2", 00:19:06.922 "adrfam": "ipv4", 00:19:06.922 "trsvcid": "4420", 00:19:06.922 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:06.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:06.922 "prchk_reftag": false, 00:19:06.922 "prchk_guard": false, 00:19:06.922 "hdgst": false, 00:19:06.922 "ddgst": false, 00:19:06.922 "dhchap_key": "key1", 00:19:06.922 "dhchap_ctrlr_key": "ckey2", 00:19:06.922 "method": "bdev_nvme_attach_controller", 00:19:06.922 "req_id": 1 00:19:06.922 } 00:19:06.922 Got JSON-RPC error response 00:19:06.922 response: 00:19:06.922 { 00:19:06.922 "code": -5, 00:19:06.922 "message": "Input/output error" 00:19:06.922 } 00:19:06.922 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:06.922 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:06.922 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:06.922 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:06.922 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:06.922 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.922 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.922 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.922 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 
00:19:06.922 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.922 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.922 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.922 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.923 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:06.923 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.923 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:06.923 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.923 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:06.923 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.923 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.923 19:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.857 request: 00:19:07.857 { 00:19:07.857 "name": "nvme0", 00:19:07.857 "trtype": "tcp", 00:19:07.857 "traddr": "10.0.0.2", 00:19:07.857 "adrfam": "ipv4", 00:19:07.857 "trsvcid": "4420", 00:19:07.857 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:07.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:07.857 "prchk_reftag": false, 00:19:07.857 "prchk_guard": false, 00:19:07.857 "hdgst": false, 00:19:07.857 "ddgst": false, 00:19:07.857 "dhchap_key": "key1", 00:19:07.857 "dhchap_ctrlr_key": "ckey1", 00:19:07.857 "method": "bdev_nvme_attach_controller", 00:19:07.857 "req_id": 1 00:19:07.857 } 00:19:07.857 Got JSON-RPC error response 00:19:07.857 response: 00:19:07.857 { 00:19:07.857 "code": -5, 00:19:07.857 "message": "Input/output error" 00:19:07.857 } 00:19:07.857 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:07.857 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.857 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.857 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.857 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:07.857 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:07.857 19:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 894044 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 894044 ']' 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 894044 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 894044 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 894044' 00:19:07.857 killing process with pid 894044 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 894044 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 894044 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=916817 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 916817 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 916817 ']' 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:07.857 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 916817 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 916817 ']' 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
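The `request:` dumps above show the exact parameter layout that `scripts/rpc.py` submits for `bdev_nvme_attach_controller` over the `/var/tmp/host.sock` socket. A minimal sketch of assembling that payload follows; the helper name `build_attach_request` and its hard-coded defaults are illustrative assumptions, not part of SPDK's rpc.py.

```python
import json

def build_attach_request(req_id, name, traddr, subnqn, hostnqn,
                         dhchap_key, dhchap_ctrlr_key=None):
    """Assemble a JSON-RPC 2.0 request mirroring the dumps in the log above.

    Sketch only: field names are copied from the logged request bodies; the
    helper itself is a hypothetical convenience, not an SPDK API.
    """
    params = {
        "name": name,
        "trtype": "tcp",
        "traddr": traddr,
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": subnqn,
        "hostnqn": hostnqn,
        "prchk_reftag": False,
        "prchk_guard": False,
        "hdgst": False,
        "ddgst": False,
        "dhchap_key": dhchap_key,
    }
    # The controller (bidirectional) key is only present when the test passes
    # --dhchap-ctrlr-key, matching the logged key1/ckey2 variants.
    if dhchap_ctrlr_key is not None:
        params["dhchap_ctrlr_key"] = dhchap_ctrlr_key
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "bdev_nvme_attach_controller",
        "params": params,
    })
```

A failed authentication surfaces as the JSON-RPC error object seen in the log (`"code": -5`, `"message": "Input/output error"`) rather than a transport-level failure.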
00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.230 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.487 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.487 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:09.487 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.487 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.487 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:09.487 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:09.487 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.487 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:09.487 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.487 
19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.487 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.487 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.487 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:10.416 00:19:10.416 19:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.416 19:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.416 19:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.672 19:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.672 19:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.672 19:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.672 19:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.672 19:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.672 19:10:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.672 { 00:19:10.672 "cntlid": 1, 00:19:10.672 "qid": 0, 00:19:10.672 "state": "enabled", 00:19:10.672 "thread": "nvmf_tgt_poll_group_000", 00:19:10.672 "listen_address": { 00:19:10.672 "trtype": "TCP", 00:19:10.672 "adrfam": "IPv4", 00:19:10.672 "traddr": "10.0.0.2", 00:19:10.672 "trsvcid": "4420" 00:19:10.672 }, 00:19:10.672 "peer_address": { 00:19:10.672 "trtype": "TCP", 00:19:10.672 "adrfam": "IPv4", 00:19:10.672 "traddr": "10.0.0.1", 00:19:10.672 "trsvcid": "45514" 00:19:10.672 }, 00:19:10.672 "auth": { 00:19:10.672 "state": "completed", 00:19:10.672 "digest": "sha512", 00:19:10.672 "dhgroup": "ffdhe8192" 00:19:10.672 } 00:19:10.672 } 00:19:10.672 ]' 00:19:10.672 19:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.672 19:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.672 19:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.672 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:10.672 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.672 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.672 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.672 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.929 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAwYzg2Y2U5YmVmNzY0M2RjMzc2OTE2MGE4NjM0ZTlmNTI3OTI3ZDhjMzhhNWZiY2U1M2MyYzZlYjM1NjU3ZEjsMjY=: 00:19:11.860 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.860 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:11.860 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.860 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.860 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.860 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:11.860 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.860 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.860 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.860 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:11.860 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:12.117 19:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.117 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:12.117 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.117 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:12.117 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:12.117 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:12.117 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:12.117 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.118 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.375 request: 00:19:12.375 { 00:19:12.375 "name": "nvme0", 00:19:12.375 "trtype": "tcp", 00:19:12.375 
"traddr": "10.0.0.2", 00:19:12.375 "adrfam": "ipv4", 00:19:12.375 "trsvcid": "4420", 00:19:12.375 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:12.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:12.375 "prchk_reftag": false, 00:19:12.375 "prchk_guard": false, 00:19:12.375 "hdgst": false, 00:19:12.375 "ddgst": false, 00:19:12.375 "dhchap_key": "key3", 00:19:12.375 "method": "bdev_nvme_attach_controller", 00:19:12.375 "req_id": 1 00:19:12.375 } 00:19:12.375 Got JSON-RPC error response 00:19:12.375 response: 00:19:12.375 { 00:19:12.375 "code": -5, 00:19:12.375 "message": "Input/output error" 00:19:12.375 } 00:19:12.375 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:12.375 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:12.375 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:12.375 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:12.375 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:12.375 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:12.375 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:12.375 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:12.633 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.633 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:12.633 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.633 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:12.633 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:12.633 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:12.633 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:12.633 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.633 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.890 request: 00:19:12.890 { 00:19:12.890 "name": "nvme0", 00:19:12.890 "trtype": "tcp", 00:19:12.890 "traddr": "10.0.0.2", 00:19:12.890 "adrfam": "ipv4", 00:19:12.890 "trsvcid": "4420", 00:19:12.890 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:12.890 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:12.890 "prchk_reftag": false, 00:19:12.890 "prchk_guard": false, 00:19:12.890 "hdgst": false, 00:19:12.890 "ddgst": false, 00:19:12.890 "dhchap_key": "key3", 00:19:12.890 "method": "bdev_nvme_attach_controller", 00:19:12.890 "req_id": 1 00:19:12.890 } 00:19:12.890 Got JSON-RPC error response 00:19:12.890 response: 00:19:12.890 { 00:19:12.890 "code": -5, 00:19:12.890 "message": "Input/output error" 00:19:12.890 } 00:19:12.890 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:12.890 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:12.890 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:12.891 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:12.891 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:12.891 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:12.891 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:12.891 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:12.891 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:12.891 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.148 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.149 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.406 request: 00:19:13.406 { 00:19:13.406 "name": "nvme0", 00:19:13.406 "trtype": "tcp", 00:19:13.406 "traddr": "10.0.0.2", 00:19:13.406 "adrfam": "ipv4", 00:19:13.406 "trsvcid": "4420", 00:19:13.406 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:13.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:13.406 "prchk_reftag": false, 00:19:13.406 "prchk_guard": false, 00:19:13.406 "hdgst": false, 00:19:13.406 "ddgst": false, 00:19:13.406 "dhchap_key": "key0", 00:19:13.406 "dhchap_ctrlr_key": "key1", 00:19:13.406 "method": "bdev_nvme_attach_controller", 00:19:13.406 "req_id": 1 00:19:13.406 } 00:19:13.406 Got JSON-RPC error response 00:19:13.406 response: 00:19:13.406 { 00:19:13.406 "code": -5, 00:19:13.406 "message": "Input/output error" 00:19:13.406 } 00:19:13.406 19:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:13.406 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:13.406 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:13.406 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:13.406 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:13.406 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:13.703 00:19:13.703 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:13.703 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:13.703 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.960 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.960 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.960 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:14.218 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:14.218 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:14.218 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 894199 00:19:14.218 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 894199 ']' 00:19:14.218 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 894199 00:19:14.218 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:14.218 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.218 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 894199 00:19:14.218 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:14.218 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:14.218 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 894199' 00:19:14.218 killing process with pid 894199 00:19:14.218 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 894199 00:19:14.218 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 894199 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:14.784 19:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:14.784 rmmod nvme_tcp 00:19:14.784 rmmod nvme_fabrics 00:19:14.784 rmmod nvme_keyring 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 916817 ']' 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 916817 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 916817 ']' 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 916817 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 916817 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 916817' 00:19:14.784 killing process with pid 916817 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 916817 00:19:14.784 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 916817 00:19:15.041 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:15.041 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:15.041 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:15.041 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:15.041 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:15.041 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.041 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.041 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.OVA /tmp/spdk.key-sha256.yfV /tmp/spdk.key-sha384.oQf /tmp/spdk.key-sha512.90d /tmp/spdk.key-sha512.m0M /tmp/spdk.key-sha384.vY7 /tmp/spdk.key-sha256.BGE '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:17.604 00:19:17.604 real 3m11.767s 00:19:17.604 user 7m24.560s 00:19:17.604 sys 0m25.433s 00:19:17.604 19:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.604 ************************************ 00:19:17.604 END TEST nvmf_auth_target 00:19:17.604 ************************************ 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:17.604 ************************************ 00:19:17.604 START TEST nvmf_bdevio_no_huge 00:19:17.604 ************************************ 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:17.604 * Looking for test storage... 
00:19:17.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.604 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:17.605 
19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:17.605 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.142 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.142 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:20.142 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:20.142 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:20.142 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:20.142 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:20.142 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:20.142 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:20.142 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:20.142 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:20.142 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:20.142 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:20.142 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:20.142 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.143 19:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:20.143 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:20.143 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:20.143 Found net devices under 0000:09:00.0: cvl_0_0 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:20.143 Found net devices under 0000:09:00.1: cvl_0_1 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.143 19:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:20.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:19:20.143 00:19:20.143 --- 10.0.0.2 ping statistics --- 00:19:20.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.143 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:20.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:19:20.143 00:19:20.143 --- 10.0.0.1 ping statistics --- 00:19:20.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.143 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.143 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=919925 00:19:20.143 19:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:20.144 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 919925 00:19:20.144 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 919925 ']' 00:19:20.144 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.144 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:20.144 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.144 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:20.144 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.144 [2024-07-25 19:10:12.333424] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:20.144 [2024-07-25 19:10:12.333505] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:20.144 [2024-07-25 19:10:12.419708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.144 [2024-07-25 19:10:12.528788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:20.144 [2024-07-25 19:10:12.528842] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.144 [2024-07-25 19:10:12.528856] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.144 [2024-07-25 19:10:12.528869] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.144 [2024-07-25 19:10:12.528879] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.144 [2024-07-25 19:10:12.528930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:20.144 [2024-07-25 19:10:12.528989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:20.144 [2024-07-25 19:10:12.529053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:20.144 [2024-07-25 19:10:12.529056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.077 [2024-07-25 19:10:13.316130] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.077 Malloc0 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.077 [2024-07-25 19:10:13.354117] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:21.077 { 00:19:21.077 "params": { 00:19:21.077 "name": "Nvme$subsystem", 00:19:21.077 "trtype": "$TEST_TRANSPORT", 00:19:21.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:21.077 "adrfam": "ipv4", 00:19:21.077 "trsvcid": "$NVMF_PORT", 00:19:21.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:21.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:21.077 "hdgst": ${hdgst:-false}, 00:19:21.077 "ddgst": ${ddgst:-false} 00:19:21.077 }, 00:19:21.077 "method": "bdev_nvme_attach_controller" 00:19:21.077 } 00:19:21.077 EOF 00:19:21.077 )") 00:19:21.077 19:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:21.077 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:21.077 "params": { 00:19:21.077 "name": "Nvme1", 00:19:21.077 "trtype": "tcp", 00:19:21.077 "traddr": "10.0.0.2", 00:19:21.077 "adrfam": "ipv4", 00:19:21.077 "trsvcid": "4420", 00:19:21.077 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.077 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.077 "hdgst": false, 00:19:21.077 "ddgst": false 00:19:21.077 }, 00:19:21.077 "method": "bdev_nvme_attach_controller" 00:19:21.077 }' 00:19:21.077 [2024-07-25 19:10:13.398696] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:21.077 [2024-07-25 19:10:13.398787] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid920081 ] 00:19:21.077 [2024-07-25 19:10:13.470723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:21.335 [2024-07-25 19:10:13.582834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.335 [2024-07-25 19:10:13.582883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.335 [2024-07-25 19:10:13.582886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.335 I/O targets: 00:19:21.335 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:21.335 00:19:21.335 00:19:21.335 CUnit - A unit testing framework for C - Version 2.1-3 00:19:21.335 http://cunit.sourceforge.net/ 00:19:21.335 00:19:21.335 00:19:21.335 Suite: bdevio tests on: Nvme1n1 00:19:21.626 Test: blockdev write read block 
...passed 00:19:21.626 Test: blockdev write zeroes read block ...passed 00:19:21.626 Test: blockdev write zeroes read no split ...passed 00:19:21.626 Test: blockdev write zeroes read split ...passed 00:19:21.626 Test: blockdev write zeroes read split partial ...passed 00:19:21.626 Test: blockdev reset ...[2024-07-25 19:10:14.000610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:21.626 [2024-07-25 19:10:14.000721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56fb0 (9): Bad file descriptor 00:19:21.626 [2024-07-25 19:10:14.018699] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:21.626 passed 00:19:21.626 Test: blockdev write read 8 blocks ...passed 00:19:21.626 Test: blockdev write read size > 128k ...passed 00:19:21.626 Test: blockdev write read invalid size ...passed 00:19:21.626 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:21.626 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:21.626 Test: blockdev write read max offset ...passed 00:19:21.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:21.884 Test: blockdev writev readv 8 blocks ...passed 00:19:21.884 Test: blockdev writev readv 30 x 1block ...passed 00:19:21.884 Test: blockdev writev readv block ...passed 00:19:21.884 Test: blockdev writev readv size > 128k ...passed 00:19:21.884 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:21.884 Test: blockdev comparev and writev ...[2024-07-25 19:10:14.236854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.884 [2024-07-25 19:10:14.236895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:21.884 [2024-07-25 19:10:14.236921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.884 [2024-07-25 19:10:14.236938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.884 [2024-07-25 19:10:14.237329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.884 [2024-07-25 19:10:14.237355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:21.884 [2024-07-25 19:10:14.237377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.884 [2024-07-25 19:10:14.237393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:21.884 [2024-07-25 19:10:14.237765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.884 [2024-07-25 19:10:14.237790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:21.884 [2024-07-25 19:10:14.237812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.884 [2024-07-25 19:10:14.237828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:21.884 [2024-07-25 19:10:14.238209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.884 [2024-07-25 19:10:14.238234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:19:21.884 [2024-07-25 19:10:14.238255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.884 [2024-07-25 19:10:14.238271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:21.884 passed 00:19:21.884 Test: blockdev nvme passthru rw ...passed 00:19:21.884 Test: blockdev nvme passthru vendor specific ...[2024-07-25 19:10:14.322457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.884 [2024-07-25 19:10:14.322487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:21.884 [2024-07-25 19:10:14.322688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.884 [2024-07-25 19:10:14.322718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:21.884 [2024-07-25 19:10:14.322918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.884 [2024-07-25 19:10:14.322941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:21.884 [2024-07-25 19:10:14.323144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.884 [2024-07-25 19:10:14.323169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:21.884 passed 00:19:21.884 Test: blockdev nvme admin passthru ...passed 00:19:22.142 Test: blockdev copy ...passed 00:19:22.142 00:19:22.142 Run Summary: Type Total Ran Passed Failed Inactive 00:19:22.142 suites 1 1 
n/a 0 0 00:19:22.142 tests 23 23 23 0 0 00:19:22.142 asserts 152 152 152 0 n/a 00:19:22.142 00:19:22.142 Elapsed time = 1.204 seconds 00:19:22.400 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:22.400 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.400 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:22.400 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.400 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:22.400 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:22.400 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:22.400 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:22.400 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:22.400 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:22.400 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:22.400 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:22.400 rmmod nvme_tcp 00:19:22.400 rmmod nvme_fabrics 00:19:22.400 rmmod nvme_keyring 00:19:22.400 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:22.400 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:22.400 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:22.401 19:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 919925 ']' 00:19:22.401 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 919925 00:19:22.401 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 919925 ']' 00:19:22.401 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 919925 00:19:22.401 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:19:22.401 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:22.401 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 919925 00:19:22.401 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:19:22.401 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:19:22.401 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 919925' 00:19:22.401 killing process with pid 919925 00:19:22.401 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 919925 00:19:22.401 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 919925 00:19:22.967 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:22.967 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:22.967 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:22.967 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:19:22.967 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:22.967 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.967 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.967 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.874 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:24.874 00:19:24.874 real 0m7.718s 00:19:24.874 user 0m13.697s 00:19:24.874 sys 0m2.872s 00:19:24.874 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:24.874 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.874 ************************************ 00:19:24.874 END TEST nvmf_bdevio_no_huge 00:19:24.874 ************************************ 00:19:24.874 19:10:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:24.874 19:10:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:24.874 19:10:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:24.874 19:10:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:24.874 ************************************ 00:19:24.874 START TEST nvmf_tls 00:19:24.874 ************************************ 00:19:24.874 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:25.133 * Looking for test storage... 
00:19:25.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.133 
19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:25.133 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:27.665 19:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:27.665 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:27.666 19:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:27.666 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:27.666 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:27.666 19:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:27.666 Found net devices under 0000:09:00.0: cvl_0_0 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:27.666 Found net devices under 0000:09:00.1: cvl_0_1 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:27.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:27.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:19:27.666 00:19:27.666 --- 10.0.0.2 ping statistics --- 00:19:27.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.666 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:27.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:27.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:19:27.666 00:19:27.666 --- 10.0.0.1 ping statistics --- 00:19:27.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.666 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:27.666 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:27.666 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:27.666 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:27.666 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:19:27.666 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.666 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=922563 00:19:27.667 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:27.667 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 922563 00:19:27.667 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 922563 ']' 00:19:27.667 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.667 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:27.667 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.667 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:27.667 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.667 [2024-07-25 19:10:20.057881] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:19:27.667 [2024-07-25 19:10:20.057983] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.667 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.924 [2024-07-25 19:10:20.141903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.924 [2024-07-25 19:10:20.258676] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.924 [2024-07-25 19:10:20.258741] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.924 [2024-07-25 19:10:20.258757] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.924 [2024-07-25 19:10:20.258771] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.924 [2024-07-25 19:10:20.258783] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:27.925 [2024-07-25 19:10:20.258813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.859 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.859 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:28.859 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.859 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:28.859 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.859 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.859 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:28.860 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:28.860 true 00:19:28.860 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:28.860 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:29.118 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:29.118 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:29.118 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:29.376 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:29.376 19:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:29.634 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:29.634 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:29.634 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:29.892 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:29.892 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:30.150 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:30.150 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:30.150 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:30.150 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:30.408 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:30.408 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:30.408 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:30.666 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:30.666 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:30.923 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:30.923 
19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:30.924 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:31.181 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:31.181 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:31.438 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:31.438 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:31.438 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:31.438 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:31.439 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:31.439 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:31.439 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:31.439 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:31.439 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:31.439 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:31.439 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:31.439 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:19:31.439 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:31.439 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:31.439 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:31.439 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:31.439 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:31.697 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:31.697 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:31.697 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.NSXkD1BzuU 00:19:31.697 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:31.697 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.dwGgM6HznL 00:19:31.697 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:31.697 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:31.697 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.NSXkD1BzuU 00:19:31.697 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.dwGgM6HznL 00:19:31.697 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:31.955 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:32.213 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.NSXkD1BzuU 00:19:32.213 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.NSXkD1BzuU 00:19:32.213 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:32.472 [2024-07-25 19:10:24.790126] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.472 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:32.730 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:32.988 [2024-07-25 19:10:25.295574] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:32.988 [2024-07-25 19:10:25.295799] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.988 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:33.247 malloc0 00:19:33.247 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:33.505 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NSXkD1BzuU 00:19:33.763 
[2024-07-25 19:10:26.054027] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:33.763 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.NSXkD1BzuU 00:19:33.763 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.748 Initializing NVMe Controllers 00:19:43.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:43.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:43.748 Initialization complete. Launching workers. 00:19:43.748 ======================================================== 00:19:43.748 Latency(us) 00:19:43.748 Device Information : IOPS MiB/s Average min max 00:19:43.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7635.89 29.83 8383.89 1263.73 9560.18 00:19:43.748 ======================================================== 00:19:43.748 Total : 7635.89 29.83 8383.89 1263.73 9560.18 00:19:43.748 00:19:43.748 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NSXkD1BzuU 00:19:43.748 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:43.748 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:43.748 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:43.748 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NSXkD1BzuU' 00:19:43.748 19:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:43.748 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=924470 00:19:43.748 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:43.748 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:43.748 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 924470 /var/tmp/bdevperf.sock 00:19:43.748 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 924470 ']' 00:19:43.748 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.748 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.748 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:43.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:43.748 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.748 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.006 [2024-07-25 19:10:36.222830] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:19:44.006 [2024-07-25 19:10:36.222921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924470 ] 00:19:44.006 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.006 [2024-07-25 19:10:36.290258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.006 [2024-07-25 19:10:36.399406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.263 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.263 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:44.263 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NSXkD1BzuU 00:19:44.520 [2024-07-25 19:10:36.760175] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:44.520 [2024-07-25 19:10:36.760298] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:44.520 TLSTESTn1 00:19:44.520 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:44.520 Running I/O for 10 seconds... 
00:19:56.720 00:19:56.720 Latency(us) 00:19:56.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.720 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:56.720 Verification LBA range: start 0x0 length 0x2000 00:19:56.720 TLSTESTn1 : 10.06 2090.00 8.16 0.00 0.00 61070.18 5606.97 94760.20 00:19:56.720 =================================================================================================================== 00:19:56.720 Total : 2090.00 8.16 0.00 0.00 61070.18 5606.97 94760.20 00:19:56.720 0 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 924470 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 924470 ']' 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 924470 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 924470 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 924470' 00:19:56.720 killing process with pid 924470 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 924470 00:19:56.720 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.720 
00:19:56.720 Latency(us) 00:19:56.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.720 =================================================================================================================== 00:19:56.720 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.720 [2024-07-25 19:10:47.098908] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 924470 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dwGgM6HznL 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dwGgM6HznL 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dwGgM6HznL 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:56.720 19:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dwGgM6HznL' 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=925779 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 925779 /var/tmp/bdevperf.sock 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 925779 ']' 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.720 [2024-07-25 19:10:47.417533] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:19:56.720 [2024-07-25 19:10:47.417631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid925779 ] 00:19:56.720 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.720 [2024-07-25 19:10:47.483303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.720 [2024-07-25 19:10:47.586758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dwGgM6HznL 00:19:56.720 [2024-07-25 19:10:47.930745] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.720 [2024-07-25 19:10:47.930873] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:56.720 [2024-07-25 19:10:47.940654] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:56.720 [2024-07-25 19:10:47.940791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x716f90 (107): Transport endpoint is not connected 00:19:56.720 [2024-07-25 19:10:47.941759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x716f90 (9): 
Bad file descriptor 00:19:56.720 [2024-07-25 19:10:47.942756] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:56.720 [2024-07-25 19:10:47.942777] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:56.720 [2024-07-25 19:10:47.942809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:56.720 request: 00:19:56.720 { 00:19:56.720 "name": "TLSTEST", 00:19:56.720 "trtype": "tcp", 00:19:56.720 "traddr": "10.0.0.2", 00:19:56.720 "adrfam": "ipv4", 00:19:56.720 "trsvcid": "4420", 00:19:56.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.720 "prchk_reftag": false, 00:19:56.720 "prchk_guard": false, 00:19:56.720 "hdgst": false, 00:19:56.720 "ddgst": false, 00:19:56.720 "psk": "/tmp/tmp.dwGgM6HznL", 00:19:56.720 "method": "bdev_nvme_attach_controller", 00:19:56.720 "req_id": 1 00:19:56.720 } 00:19:56.720 Got JSON-RPC error response 00:19:56.720 response: 00:19:56.720 { 00:19:56.720 "code": -5, 00:19:56.720 "message": "Input/output error" 00:19:56.720 } 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 925779 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 925779 ']' 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 925779 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 925779 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:56.720 19:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 925779' 00:19:56.720 killing process with pid 925779 00:19:56.720 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 925779 00:19:56.720 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.720 00:19:56.721 Latency(us) 00:19:56.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.721 =================================================================================================================== 00:19:56.721 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:56.721 [2024-07-25 19:10:47.988977] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:56.721 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 925779 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NSXkD1BzuU 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NSXkD1BzuU 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NSXkD1BzuU 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NSXkD1BzuU' 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=925920 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 925920 /var/tmp/bdevperf.sock 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 925920 ']' 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.721 [2024-07-25 19:10:48.273693] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:56.721 [2024-07-25 19:10:48.273760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid925920 ] 00:19:56.721 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.721 [2024-07-25 19:10:48.338650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.721 [2024-07-25 19:10:48.444690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.NSXkD1BzuU 00:19:56.721 [2024-07-25 19:10:48.780695] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.721 [2024-07-25 19:10:48.780806] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:56.721 [2024-07-25 19:10:48.792738] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:56.721 [2024-07-25 19:10:48.792772] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:56.721 [2024-07-25 19:10:48.792825] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:56.721 [2024-07-25 19:10:48.793665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1312f90 (107): Transport endpoint is not connected 00:19:56.721 [2024-07-25 19:10:48.794657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1312f90 (9): Bad file descriptor 00:19:56.721 [2024-07-25 19:10:48.795655] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:56.721 [2024-07-25 19:10:48.795673] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:56.721 [2024-07-25 19:10:48.795705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:56.721 request: 00:19:56.721 { 00:19:56.721 "name": "TLSTEST", 00:19:56.721 "trtype": "tcp", 00:19:56.721 "traddr": "10.0.0.2", 00:19:56.721 "adrfam": "ipv4", 00:19:56.721 "trsvcid": "4420", 00:19:56.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.721 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:56.721 "prchk_reftag": false, 00:19:56.721 "prchk_guard": false, 00:19:56.721 "hdgst": false, 00:19:56.721 "ddgst": false, 00:19:56.721 "psk": "/tmp/tmp.NSXkD1BzuU", 00:19:56.721 "method": "bdev_nvme_attach_controller", 00:19:56.721 "req_id": 1 00:19:56.721 } 00:19:56.721 Got JSON-RPC error response 00:19:56.721 response: 00:19:56.721 { 00:19:56.721 "code": -5, 00:19:56.721 "message": "Input/output error" 00:19:56.721 } 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 925920 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 925920 ']' 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 925920 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 925920 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 925920' 00:19:56.721 killing process with pid 925920 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 925920 00:19:56.721 Received shutdown signal, test time was about 
10.000000 seconds 00:19:56.721 00:19:56.721 Latency(us) 00:19:56.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.721 =================================================================================================================== 00:19:56.721 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:56.721 [2024-07-25 19:10:48.837630] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:56.721 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 925920 00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NSXkD1BzuU 00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NSXkD1BzuU 00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 
00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NSXkD1BzuU 00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:56.721 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:56.722 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NSXkD1BzuU' 00:19:56.722 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.722 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=925963 00:19:56.722 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.722 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.722 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 925963 /var/tmp/bdevperf.sock 00:19:56.722 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 925963 ']' 00:19:56.722 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.722 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:56.722 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:19:56.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.722 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:56.722 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.722 [2024-07-25 19:10:49.137536] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:56.722 [2024-07-25 19:10:49.137642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid925963 ] 00:19:56.722 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.979 [2024-07-25 19:10:49.206513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.979 [2024-07-25 19:10:49.314252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.979 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.979 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:56.979 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NSXkD1BzuU 00:19:57.236 [2024-07-25 19:10:49.656596] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.237 [2024-07-25 19:10:49.656719] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:57.237 [2024-07-25 19:10:49.664988] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could 
not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:57.237 [2024-07-25 19:10:49.665036] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:57.237 [2024-07-25 19:10:49.665075] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:57.237 [2024-07-25 19:10:49.665716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2421f90 (107): Transport endpoint is not connected 00:19:57.237 [2024-07-25 19:10:49.666704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2421f90 (9): Bad file descriptor 00:19:57.237 [2024-07-25 19:10:49.667703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:57.237 [2024-07-25 19:10:49.667723] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:57.237 [2024-07-25 19:10:49.667739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:57.237 request: 00:19:57.237 { 00:19:57.237 "name": "TLSTEST", 00:19:57.237 "trtype": "tcp", 00:19:57.237 "traddr": "10.0.0.2", 00:19:57.237 "adrfam": "ipv4", 00:19:57.237 "trsvcid": "4420", 00:19:57.237 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:57.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.237 "prchk_reftag": false, 00:19:57.237 "prchk_guard": false, 00:19:57.237 "hdgst": false, 00:19:57.237 "ddgst": false, 00:19:57.237 "psk": "/tmp/tmp.NSXkD1BzuU", 00:19:57.237 "method": "bdev_nvme_attach_controller", 00:19:57.237 "req_id": 1 00:19:57.237 } 00:19:57.237 Got JSON-RPC error response 00:19:57.237 response: 00:19:57.237 { 00:19:57.237 "code": -5, 00:19:57.237 "message": "Input/output error" 00:19:57.237 } 00:19:57.237 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 925963 00:19:57.237 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 925963 ']' 00:19:57.237 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 925963 00:19:57.237 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:57.237 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:57.237 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 925963 00:19:57.495 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:57.495 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:57.495 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 925963' 00:19:57.495 killing process with pid 925963 00:19:57.495 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 925963 00:19:57.495 Received shutdown signal, test time was about 
10.000000 seconds 00:19:57.495 00:19:57.495 Latency(us) 00:19:57.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.495 =================================================================================================================== 00:19:57.495 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:57.495 [2024-07-25 19:10:49.718227] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:57.495 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 925963 00:19:57.753 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:57.753 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:57.753 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:57.754 19:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=926072 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 926072 /var/tmp/bdevperf.sock 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 926072 ']' 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:57.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.754 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.754 [2024-07-25 19:10:50.028815] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:57.754 [2024-07-25 19:10:50.028923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid926072 ] 00:19:57.754 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.754 [2024-07-25 19:10:50.123704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.012 [2024-07-25 19:10:50.278013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.012 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.012 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:58.012 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:58.270 [2024-07-25 19:10:50.687832] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:58.270 [2024-07-25 19:10:50.689157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa5770 (9): Bad file descriptor 00:19:58.270 [2024-07-25 19:10:50.690151] 
nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:58.270 [2024-07-25 19:10:50.690172] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:58.270 [2024-07-25 19:10:50.690188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:58.270 request: 00:19:58.270 { 00:19:58.270 "name": "TLSTEST", 00:19:58.270 "trtype": "tcp", 00:19:58.270 "traddr": "10.0.0.2", 00:19:58.270 "adrfam": "ipv4", 00:19:58.270 "trsvcid": "4420", 00:19:58.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.270 "prchk_reftag": false, 00:19:58.270 "prchk_guard": false, 00:19:58.270 "hdgst": false, 00:19:58.270 "ddgst": false, 00:19:58.270 "method": "bdev_nvme_attach_controller", 00:19:58.270 "req_id": 1 00:19:58.270 } 00:19:58.270 Got JSON-RPC error response 00:19:58.270 response: 00:19:58.270 { 00:19:58.270 "code": -5, 00:19:58.270 "message": "Input/output error" 00:19:58.270 } 00:19:58.270 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 926072 00:19:58.270 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 926072 ']' 00:19:58.270 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 926072 00:19:58.270 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:58.270 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:58.270 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 926072 00:19:58.270 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:58.270 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:58.528 19:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 926072' 00:19:58.528 killing process with pid 926072 00:19:58.528 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 926072 00:19:58.528 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.528 00:19:58.528 Latency(us) 00:19:58.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.528 =================================================================================================================== 00:19:58.528 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.528 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 926072 00:19:58.786 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:58.786 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:58.786 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:58.786 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:58.786 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:58.786 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 922563 00:19:58.786 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 922563 ']' 00:19:58.786 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 922563 00:19:58.786 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:58.786 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:58.786 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 922563 00:19:58.786 19:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:58.786 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:58.786 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 922563' 00:19:58.786 killing process with pid 922563 00:19:58.786 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 922563 00:19:58.786 [2024-07-25 19:10:51.031061] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:58.786 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 922563 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- 
# key_long_path=/tmp/tmp.iquDBUIap0 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.iquDBUIap0 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=926276 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 926276 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 926276 ']' 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:59.044 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.044 [2024-07-25 19:10:51.422777] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:59.045 [2024-07-25 19:10:51.422870] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.045 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.045 [2024-07-25 19:10:51.496843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.302 [2024-07-25 19:10:51.602536] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.302 [2024-07-25 19:10:51.602589] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.302 [2024-07-25 19:10:51.602612] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.302 [2024-07-25 19:10:51.602622] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.303 [2024-07-25 19:10:51.602632] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:59.303 [2024-07-25 19:10:51.602657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.303 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:59.303 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:59.303 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:59.303 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:59.303 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.303 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.303 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.iquDBUIap0 00:19:59.303 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.iquDBUIap0 00:19:59.303 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:59.560 [2024-07-25 19:10:51.952937] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.560 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:59.818 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:00.076 [2024-07-25 19:10:52.454274] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.076 [2024-07-25 19:10:52.454503] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:00.076 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:00.334 malloc0 00:20:00.334 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:00.593 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iquDBUIap0 00:20:00.851 [2024-07-25 19:10:53.209070] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:00.851 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iquDBUIap0 00:20:00.851 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:00.851 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:00.851 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:00.851 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iquDBUIap0' 00:20:00.851 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.851 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=926504 00:20:00.851 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:00.851 19:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:00.851 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 926504 /var/tmp/bdevperf.sock 00:20:00.851 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 926504 ']' 00:20:00.851 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.851 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:00.851 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.851 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.851 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.851 [2024-07-25 19:10:53.274198] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:00.851 [2024-07-25 19:10:53.274292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid926504 ] 00:20:00.851 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.117 [2024-07-25 19:10:53.344130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.117 [2024-07-25 19:10:53.453755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.117 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.117 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:01.117 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iquDBUIap0 00:20:01.374 [2024-07-25 19:10:53.805274] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.374 [2024-07-25 19:10:53.805405] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:01.632 TLSTESTn1 00:20:01.632 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:01.632 Running I/O for 10 seconds... 
00:20:13.831 00:20:13.831 Latency(us) 00:20:13.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.832 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:13.832 Verification LBA range: start 0x0 length 0x2000 00:20:13.832 TLSTESTn1 : 10.06 2015.27 7.87 0.00 0.00 63321.58 5825.42 96313.65 00:20:13.832 =================================================================================================================== 00:20:13.832 Total : 2015.27 7.87 0.00 0.00 63321.58 5825.42 96313.65 00:20:13.832 0 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 926504 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 926504 ']' 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 926504 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 926504 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 926504' 00:20:13.832 killing process with pid 926504 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 926504 00:20:13.832 Received shutdown signal, test time was about 10.000000 seconds 00:20:13.832 
00:20:13.832 Latency(us) 00:20:13.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.832 =================================================================================================================== 00:20:13.832 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:13.832 [2024-07-25 19:11:04.141962] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 926504 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.iquDBUIap0 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iquDBUIap0 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iquDBUIap0 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iquDBUIap0 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:13.832 19:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iquDBUIap0' 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=927826 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 927826 /var/tmp/bdevperf.sock 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 927826 ']' 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.832 [2024-07-25 19:11:04.462788] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:13.832 [2024-07-25 19:11:04.462886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid927826 ] 00:20:13.832 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.832 [2024-07-25 19:11:04.529436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.832 [2024-07-25 19:11:04.632293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:13.832 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iquDBUIap0 00:20:13.832 [2024-07-25 19:11:05.005032] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.832 [2024-07-25 19:11:05.005141] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:13.832 [2024-07-25 19:11:05.005160] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.iquDBUIap0 00:20:13.832 request: 00:20:13.832 { 00:20:13.832 "name": "TLSTEST", 00:20:13.832 "trtype": "tcp", 00:20:13.832 "traddr": "10.0.0.2", 00:20:13.832 
"adrfam": "ipv4", 00:20:13.832 "trsvcid": "4420", 00:20:13.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:13.832 "prchk_reftag": false, 00:20:13.832 "prchk_guard": false, 00:20:13.832 "hdgst": false, 00:20:13.832 "ddgst": false, 00:20:13.832 "psk": "/tmp/tmp.iquDBUIap0", 00:20:13.832 "method": "bdev_nvme_attach_controller", 00:20:13.832 "req_id": 1 00:20:13.832 } 00:20:13.832 Got JSON-RPC error response 00:20:13.832 response: 00:20:13.832 { 00:20:13.832 "code": -1, 00:20:13.832 "message": "Operation not permitted" 00:20:13.832 } 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 927826 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 927826 ']' 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 927826 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 927826 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 927826' 00:20:13.832 killing process with pid 927826 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 927826 00:20:13.832 Received shutdown signal, test time was about 10.000000 seconds 00:20:13.832 00:20:13.832 Latency(us) 00:20:13.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:13.832 =================================================================================================================== 00:20:13.832 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 927826 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 926276 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 926276 ']' 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 926276 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 926276 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:13.832 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 926276' 00:20:13.833 killing process with pid 926276 00:20:13.833 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 926276 00:20:13.833 [2024-07-25 19:11:05.310836] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:13.833 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 926276 00:20:13.833 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:13.833 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:13.833 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:13.833 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.833 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=927971 00:20:13.833 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:13.833 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 927971 00:20:13.833 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 927971 ']' 00:20:13.833 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.833 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:13.833 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:13.833 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:13.833 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.833 [2024-07-25 19:11:05.656991] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:13.833 [2024-07-25 19:11:05.657111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.833 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.833 [2024-07-25 19:11:05.736349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.833 [2024-07-25 19:11:05.849242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.833 [2024-07-25 19:11:05.849305] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.833 [2024-07-25 19:11:05.849332] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.833 [2024-07-25 19:11:05.849346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.833 [2024-07-25 19:11:05.849357] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:13.833 [2024-07-25 19:11:05.849386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.iquDBUIap0 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.iquDBUIap0 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.iquDBUIap0 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.iquDBUIap0 00:20:14.423 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.682 [2024-07-25 19:11:06.918120] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.682 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:14.940 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:15.198 [2024-07-25 19:11:07.463540] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:15.198 [2024-07-25 19:11:07.463786] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.198 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:15.456 malloc0 00:20:15.456 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:15.714 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iquDBUIap0 00:20:15.973 [2024-07-25 19:11:08.301662] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:15.973 [2024-07-25 19:11:08.301706] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:15.973 [2024-07-25 19:11:08.301744] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:15.973 request: 00:20:15.973 { 
00:20:15.973 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.973 "host": "nqn.2016-06.io.spdk:host1", 00:20:15.973 "psk": "/tmp/tmp.iquDBUIap0", 00:20:15.973 "method": "nvmf_subsystem_add_host", 00:20:15.973 "req_id": 1 00:20:15.973 } 00:20:15.973 Got JSON-RPC error response 00:20:15.973 response: 00:20:15.973 { 00:20:15.973 "code": -32603, 00:20:15.973 "message": "Internal error" 00:20:15.973 } 00:20:15.973 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:15.973 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:15.973 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:15.973 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:15.973 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 927971 00:20:15.973 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 927971 ']' 00:20:15.973 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 927971 00:20:15.973 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:15.973 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:15.973 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 927971 00:20:15.973 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:15.973 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:15.973 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 927971' 00:20:15.973 killing process with pid 927971 00:20:15.973 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 927971 00:20:15.973 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 927971 00:20:16.231 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.iquDBUIap0 00:20:16.231 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:16.231 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:16.231 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:16.231 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.231 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=928397 00:20:16.231 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:16.231 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 928397 00:20:16.231 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 928397 ']' 00:20:16.231 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.231 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:16.231 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:16.231 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:16.231 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.489 [2024-07-25 19:11:08.707914] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:16.489 [2024-07-25 19:11:08.708013] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.489 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.489 [2024-07-25 19:11:08.787894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.489 [2024-07-25 19:11:08.904727] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.489 [2024-07-25 19:11:08.904789] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.489 [2024-07-25 19:11:08.904806] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.489 [2024-07-25 19:11:08.904819] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.489 [2024-07-25 19:11:08.904830] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:16.489 [2024-07-25 19:11:08.904861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.442 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:17.442 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:17.442 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:17.442 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:17.442 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.442 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.442 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.iquDBUIap0 00:20:17.442 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.iquDBUIap0 00:20:17.442 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:17.716 [2024-07-25 19:11:09.918621] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.716 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:17.974 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:18.233 [2024-07-25 19:11:10.448021] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.233 [2024-07-25 19:11:10.448275] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:18.233 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:18.491 malloc0 00:20:18.491 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:18.748 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iquDBUIap0 00:20:18.748 [2024-07-25 19:11:11.189741] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:18.748 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=928684 00:20:18.748 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.748 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.748 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 928684 /var/tmp/bdevperf.sock 00:20:18.748 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 928684 ']' 00:20:18.748 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.748 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:18.748 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:18.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.748 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:18.748 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.006 [2024-07-25 19:11:11.252177] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:19.006 [2024-07-25 19:11:11.252267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid928684 ] 00:20:19.006 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.006 [2024-07-25 19:11:11.317967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.006 [2024-07-25 19:11:11.423214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.264 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:19.264 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:19.264 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iquDBUIap0 00:20:19.522 [2024-07-25 19:11:11.783724] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.522 [2024-07-25 19:11:11.783844] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:19.522 TLSTESTn1 00:20:19.522 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:19.781 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:19.781 "subsystems": [ 00:20:19.781 { 00:20:19.781 "subsystem": "keyring", 00:20:19.781 "config": [] 00:20:19.781 }, 00:20:19.781 { 00:20:19.781 "subsystem": "iobuf", 00:20:19.781 "config": [ 00:20:19.781 { 00:20:19.781 "method": "iobuf_set_options", 00:20:19.781 "params": { 00:20:19.781 "small_pool_count": 8192, 00:20:19.781 "large_pool_count": 1024, 00:20:19.781 "small_bufsize": 8192, 00:20:19.781 "large_bufsize": 135168 00:20:19.781 } 00:20:19.781 } 00:20:19.781 ] 00:20:19.781 }, 00:20:19.781 { 00:20:19.781 "subsystem": "sock", 00:20:19.781 "config": [ 00:20:19.781 { 00:20:19.781 "method": "sock_set_default_impl", 00:20:19.781 "params": { 00:20:19.781 "impl_name": "posix" 00:20:19.781 } 00:20:19.781 }, 00:20:19.781 { 00:20:19.781 "method": "sock_impl_set_options", 00:20:19.781 "params": { 00:20:19.781 "impl_name": "ssl", 00:20:19.781 "recv_buf_size": 4096, 00:20:19.781 "send_buf_size": 4096, 00:20:19.781 "enable_recv_pipe": true, 00:20:19.781 "enable_quickack": false, 00:20:19.781 "enable_placement_id": 0, 00:20:19.781 "enable_zerocopy_send_server": true, 00:20:19.781 "enable_zerocopy_send_client": false, 00:20:19.781 "zerocopy_threshold": 0, 00:20:19.781 "tls_version": 0, 00:20:19.781 "enable_ktls": false 00:20:19.781 } 00:20:19.781 }, 00:20:19.781 { 00:20:19.781 "method": "sock_impl_set_options", 00:20:19.781 "params": { 00:20:19.781 "impl_name": "posix", 00:20:19.781 "recv_buf_size": 2097152, 00:20:19.781 "send_buf_size": 2097152, 00:20:19.781 "enable_recv_pipe": true, 00:20:19.781 "enable_quickack": false, 00:20:19.781 "enable_placement_id": 0, 00:20:19.781 "enable_zerocopy_send_server": true, 00:20:19.781 "enable_zerocopy_send_client": false, 00:20:19.781 "zerocopy_threshold": 0, 00:20:19.781 "tls_version": 0, 00:20:19.781 "enable_ktls": false 00:20:19.781 } 
00:20:19.781 } 00:20:19.781 ] 00:20:19.781 }, 00:20:19.781 { 00:20:19.781 "subsystem": "vmd", 00:20:19.781 "config": [] 00:20:19.781 }, 00:20:19.781 { 00:20:19.781 "subsystem": "accel", 00:20:19.781 "config": [ 00:20:19.781 { 00:20:19.781 "method": "accel_set_options", 00:20:19.781 "params": { 00:20:19.781 "small_cache_size": 128, 00:20:19.781 "large_cache_size": 16, 00:20:19.781 "task_count": 2048, 00:20:19.781 "sequence_count": 2048, 00:20:19.781 "buf_count": 2048 00:20:19.781 } 00:20:19.781 } 00:20:19.781 ] 00:20:19.781 }, 00:20:19.781 { 00:20:19.781 "subsystem": "bdev", 00:20:19.781 "config": [ 00:20:19.781 { 00:20:19.781 "method": "bdev_set_options", 00:20:19.781 "params": { 00:20:19.781 "bdev_io_pool_size": 65535, 00:20:19.781 "bdev_io_cache_size": 256, 00:20:19.781 "bdev_auto_examine": true, 00:20:19.781 "iobuf_small_cache_size": 128, 00:20:19.781 "iobuf_large_cache_size": 16 00:20:19.781 } 00:20:19.781 }, 00:20:19.781 { 00:20:19.781 "method": "bdev_raid_set_options", 00:20:19.781 "params": { 00:20:19.781 "process_window_size_kb": 1024, 00:20:19.781 "process_max_bandwidth_mb_sec": 0 00:20:19.781 } 00:20:19.781 }, 00:20:19.781 { 00:20:19.781 "method": "bdev_iscsi_set_options", 00:20:19.781 "params": { 00:20:19.781 "timeout_sec": 30 00:20:19.781 } 00:20:19.781 }, 00:20:19.781 { 00:20:19.781 "method": "bdev_nvme_set_options", 00:20:19.781 "params": { 00:20:19.781 "action_on_timeout": "none", 00:20:19.781 "timeout_us": 0, 00:20:19.781 "timeout_admin_us": 0, 00:20:19.781 "keep_alive_timeout_ms": 10000, 00:20:19.781 "arbitration_burst": 0, 00:20:19.781 "low_priority_weight": 0, 00:20:19.781 "medium_priority_weight": 0, 00:20:19.781 "high_priority_weight": 0, 00:20:19.781 "nvme_adminq_poll_period_us": 10000, 00:20:19.781 "nvme_ioq_poll_period_us": 0, 00:20:19.781 "io_queue_requests": 0, 00:20:19.781 "delay_cmd_submit": true, 00:20:19.781 "transport_retry_count": 4, 00:20:19.781 "bdev_retry_count": 3, 00:20:19.781 "transport_ack_timeout": 0, 00:20:19.781 
"ctrlr_loss_timeout_sec": 0, 00:20:19.781 "reconnect_delay_sec": 0, 00:20:19.781 "fast_io_fail_timeout_sec": 0, 00:20:19.781 "disable_auto_failback": false, 00:20:19.781 "generate_uuids": false, 00:20:19.781 "transport_tos": 0, 00:20:19.781 "nvme_error_stat": false, 00:20:19.781 "rdma_srq_size": 0, 00:20:19.781 "io_path_stat": false, 00:20:19.781 "allow_accel_sequence": false, 00:20:19.781 "rdma_max_cq_size": 0, 00:20:19.781 "rdma_cm_event_timeout_ms": 0, 00:20:19.781 "dhchap_digests": [ 00:20:19.781 "sha256", 00:20:19.782 "sha384", 00:20:19.782 "sha512" 00:20:19.782 ], 00:20:19.782 "dhchap_dhgroups": [ 00:20:19.782 "null", 00:20:19.782 "ffdhe2048", 00:20:19.782 "ffdhe3072", 00:20:19.782 "ffdhe4096", 00:20:19.782 "ffdhe6144", 00:20:19.782 "ffdhe8192" 00:20:19.782 ] 00:20:19.782 } 00:20:19.782 }, 00:20:19.782 { 00:20:19.782 "method": "bdev_nvme_set_hotplug", 00:20:19.782 "params": { 00:20:19.782 "period_us": 100000, 00:20:19.782 "enable": false 00:20:19.782 } 00:20:19.782 }, 00:20:19.782 { 00:20:19.782 "method": "bdev_malloc_create", 00:20:19.782 "params": { 00:20:19.782 "name": "malloc0", 00:20:19.782 "num_blocks": 8192, 00:20:19.782 "block_size": 4096, 00:20:19.782 "physical_block_size": 4096, 00:20:19.782 "uuid": "22477beb-b7ef-4a76-81de-bc6eaf199a12", 00:20:19.782 "optimal_io_boundary": 0, 00:20:19.782 "md_size": 0, 00:20:19.782 "dif_type": 0, 00:20:19.782 "dif_is_head_of_md": false, 00:20:19.782 "dif_pi_format": 0 00:20:19.782 } 00:20:19.782 }, 00:20:19.782 { 00:20:19.782 "method": "bdev_wait_for_examine" 00:20:19.782 } 00:20:19.782 ] 00:20:19.782 }, 00:20:19.782 { 00:20:19.782 "subsystem": "nbd", 00:20:19.782 "config": [] 00:20:19.782 }, 00:20:19.782 { 00:20:19.782 "subsystem": "scheduler", 00:20:19.782 "config": [ 00:20:19.782 { 00:20:19.782 "method": "framework_set_scheduler", 00:20:19.782 "params": { 00:20:19.782 "name": "static" 00:20:19.782 } 00:20:19.782 } 00:20:19.782 ] 00:20:19.782 }, 00:20:19.782 { 00:20:19.782 "subsystem": "nvmf", 00:20:19.782 
"config": [ 00:20:19.782 { 00:20:19.782 "method": "nvmf_set_config", 00:20:19.782 "params": { 00:20:19.782 "discovery_filter": "match_any", 00:20:19.782 "admin_cmd_passthru": { 00:20:19.782 "identify_ctrlr": false 00:20:19.782 } 00:20:19.782 } 00:20:19.782 }, 00:20:19.782 { 00:20:19.782 "method": "nvmf_set_max_subsystems", 00:20:19.782 "params": { 00:20:19.782 "max_subsystems": 1024 00:20:19.782 } 00:20:19.782 }, 00:20:19.782 { 00:20:19.782 "method": "nvmf_set_crdt", 00:20:19.782 "params": { 00:20:19.782 "crdt1": 0, 00:20:19.782 "crdt2": 0, 00:20:19.782 "crdt3": 0 00:20:19.782 } 00:20:19.782 }, 00:20:19.782 { 00:20:19.782 "method": "nvmf_create_transport", 00:20:19.782 "params": { 00:20:19.782 "trtype": "TCP", 00:20:19.782 "max_queue_depth": 128, 00:20:19.782 "max_io_qpairs_per_ctrlr": 127, 00:20:19.782 "in_capsule_data_size": 4096, 00:20:19.782 "max_io_size": 131072, 00:20:19.782 "io_unit_size": 131072, 00:20:19.782 "max_aq_depth": 128, 00:20:19.782 "num_shared_buffers": 511, 00:20:19.782 "buf_cache_size": 4294967295, 00:20:19.782 "dif_insert_or_strip": false, 00:20:19.782 "zcopy": false, 00:20:19.782 "c2h_success": false, 00:20:19.782 "sock_priority": 0, 00:20:19.782 "abort_timeout_sec": 1, 00:20:19.782 "ack_timeout": 0, 00:20:19.782 "data_wr_pool_size": 0 00:20:19.782 } 00:20:19.782 }, 00:20:19.782 { 00:20:19.782 "method": "nvmf_create_subsystem", 00:20:19.782 "params": { 00:20:19.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.782 "allow_any_host": false, 00:20:19.782 "serial_number": "SPDK00000000000001", 00:20:19.782 "model_number": "SPDK bdev Controller", 00:20:19.782 "max_namespaces": 10, 00:20:19.782 "min_cntlid": 1, 00:20:19.782 "max_cntlid": 65519, 00:20:19.782 "ana_reporting": false 00:20:19.782 } 00:20:19.782 }, 00:20:19.782 { 00:20:19.782 "method": "nvmf_subsystem_add_host", 00:20:19.782 "params": { 00:20:19.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.782 "host": "nqn.2016-06.io.spdk:host1", 00:20:19.782 "psk": "/tmp/tmp.iquDBUIap0" 
00:20:19.782 } 00:20:19.782 }, 00:20:19.782 { 00:20:19.782 "method": "nvmf_subsystem_add_ns", 00:20:19.782 "params": { 00:20:19.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.782 "namespace": { 00:20:19.782 "nsid": 1, 00:20:19.782 "bdev_name": "malloc0", 00:20:19.782 "nguid": "22477BEBB7EF4A7681DEBC6EAF199A12", 00:20:19.782 "uuid": "22477beb-b7ef-4a76-81de-bc6eaf199a12", 00:20:19.782 "no_auto_visible": false 00:20:19.782 } 00:20:19.782 } 00:20:19.782 }, 00:20:19.782 { 00:20:19.782 "method": "nvmf_subsystem_add_listener", 00:20:19.782 "params": { 00:20:19.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.782 "listen_address": { 00:20:19.782 "trtype": "TCP", 00:20:19.782 "adrfam": "IPv4", 00:20:19.782 "traddr": "10.0.0.2", 00:20:19.782 "trsvcid": "4420" 00:20:19.782 }, 00:20:19.782 "secure_channel": true 00:20:19.782 } 00:20:19.782 } 00:20:19.782 ] 00:20:19.782 } 00:20:19.782 ] 00:20:19.782 }' 00:20:19.782 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:20.349 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:20.349 "subsystems": [ 00:20:20.349 { 00:20:20.349 "subsystem": "keyring", 00:20:20.349 "config": [] 00:20:20.349 }, 00:20:20.349 { 00:20:20.349 "subsystem": "iobuf", 00:20:20.349 "config": [ 00:20:20.349 { 00:20:20.349 "method": "iobuf_set_options", 00:20:20.349 "params": { 00:20:20.349 "small_pool_count": 8192, 00:20:20.349 "large_pool_count": 1024, 00:20:20.349 "small_bufsize": 8192, 00:20:20.349 "large_bufsize": 135168 00:20:20.349 } 00:20:20.349 } 00:20:20.349 ] 00:20:20.349 }, 00:20:20.349 { 00:20:20.349 "subsystem": "sock", 00:20:20.349 "config": [ 00:20:20.349 { 00:20:20.349 "method": "sock_set_default_impl", 00:20:20.349 "params": { 00:20:20.349 "impl_name": "posix" 00:20:20.349 } 00:20:20.349 }, 00:20:20.349 { 00:20:20.349 "method": "sock_impl_set_options", 00:20:20.349 
"params": { 00:20:20.349 "impl_name": "ssl", 00:20:20.349 "recv_buf_size": 4096, 00:20:20.349 "send_buf_size": 4096, 00:20:20.349 "enable_recv_pipe": true, 00:20:20.349 "enable_quickack": false, 00:20:20.349 "enable_placement_id": 0, 00:20:20.349 "enable_zerocopy_send_server": true, 00:20:20.349 "enable_zerocopy_send_client": false, 00:20:20.349 "zerocopy_threshold": 0, 00:20:20.349 "tls_version": 0, 00:20:20.349 "enable_ktls": false 00:20:20.349 } 00:20:20.349 }, 00:20:20.349 { 00:20:20.349 "method": "sock_impl_set_options", 00:20:20.349 "params": { 00:20:20.349 "impl_name": "posix", 00:20:20.349 "recv_buf_size": 2097152, 00:20:20.349 "send_buf_size": 2097152, 00:20:20.349 "enable_recv_pipe": true, 00:20:20.349 "enable_quickack": false, 00:20:20.349 "enable_placement_id": 0, 00:20:20.349 "enable_zerocopy_send_server": true, 00:20:20.349 "enable_zerocopy_send_client": false, 00:20:20.349 "zerocopy_threshold": 0, 00:20:20.349 "tls_version": 0, 00:20:20.349 "enable_ktls": false 00:20:20.349 } 00:20:20.349 } 00:20:20.349 ] 00:20:20.349 }, 00:20:20.349 { 00:20:20.349 "subsystem": "vmd", 00:20:20.349 "config": [] 00:20:20.349 }, 00:20:20.349 { 00:20:20.349 "subsystem": "accel", 00:20:20.349 "config": [ 00:20:20.349 { 00:20:20.349 "method": "accel_set_options", 00:20:20.349 "params": { 00:20:20.349 "small_cache_size": 128, 00:20:20.349 "large_cache_size": 16, 00:20:20.349 "task_count": 2048, 00:20:20.349 "sequence_count": 2048, 00:20:20.349 "buf_count": 2048 00:20:20.349 } 00:20:20.349 } 00:20:20.349 ] 00:20:20.349 }, 00:20:20.349 { 00:20:20.349 "subsystem": "bdev", 00:20:20.349 "config": [ 00:20:20.349 { 00:20:20.349 "method": "bdev_set_options", 00:20:20.349 "params": { 00:20:20.349 "bdev_io_pool_size": 65535, 00:20:20.349 "bdev_io_cache_size": 256, 00:20:20.349 "bdev_auto_examine": true, 00:20:20.349 "iobuf_small_cache_size": 128, 00:20:20.349 "iobuf_large_cache_size": 16 00:20:20.349 } 00:20:20.349 }, 00:20:20.349 { 00:20:20.349 "method": "bdev_raid_set_options", 
00:20:20.349 "params": { 00:20:20.349 "process_window_size_kb": 1024, 00:20:20.349 "process_max_bandwidth_mb_sec": 0 00:20:20.349 } 00:20:20.349 }, 00:20:20.349 { 00:20:20.349 "method": "bdev_iscsi_set_options", 00:20:20.349 "params": { 00:20:20.349 "timeout_sec": 30 00:20:20.349 } 00:20:20.349 }, 00:20:20.349 { 00:20:20.349 "method": "bdev_nvme_set_options", 00:20:20.349 "params": { 00:20:20.349 "action_on_timeout": "none", 00:20:20.349 "timeout_us": 0, 00:20:20.349 "timeout_admin_us": 0, 00:20:20.349 "keep_alive_timeout_ms": 10000, 00:20:20.349 "arbitration_burst": 0, 00:20:20.349 "low_priority_weight": 0, 00:20:20.349 "medium_priority_weight": 0, 00:20:20.349 "high_priority_weight": 0, 00:20:20.349 "nvme_adminq_poll_period_us": 10000, 00:20:20.349 "nvme_ioq_poll_period_us": 0, 00:20:20.349 "io_queue_requests": 512, 00:20:20.349 "delay_cmd_submit": true, 00:20:20.349 "transport_retry_count": 4, 00:20:20.349 "bdev_retry_count": 3, 00:20:20.349 "transport_ack_timeout": 0, 00:20:20.349 "ctrlr_loss_timeout_sec": 0, 00:20:20.349 "reconnect_delay_sec": 0, 00:20:20.349 "fast_io_fail_timeout_sec": 0, 00:20:20.350 "disable_auto_failback": false, 00:20:20.350 "generate_uuids": false, 00:20:20.350 "transport_tos": 0, 00:20:20.350 "nvme_error_stat": false, 00:20:20.350 "rdma_srq_size": 0, 00:20:20.350 "io_path_stat": false, 00:20:20.350 "allow_accel_sequence": false, 00:20:20.350 "rdma_max_cq_size": 0, 00:20:20.350 "rdma_cm_event_timeout_ms": 0, 00:20:20.350 "dhchap_digests": [ 00:20:20.350 "sha256", 00:20:20.350 "sha384", 00:20:20.350 "sha512" 00:20:20.350 ], 00:20:20.350 "dhchap_dhgroups": [ 00:20:20.350 "null", 00:20:20.350 "ffdhe2048", 00:20:20.350 "ffdhe3072", 00:20:20.350 "ffdhe4096", 00:20:20.350 "ffdhe6144", 00:20:20.350 "ffdhe8192" 00:20:20.350 ] 00:20:20.350 } 00:20:20.350 }, 00:20:20.350 { 00:20:20.350 "method": "bdev_nvme_attach_controller", 00:20:20.350 "params": { 00:20:20.350 "name": "TLSTEST", 00:20:20.350 "trtype": "TCP", 00:20:20.350 "adrfam": "IPv4", 
00:20:20.350 "traddr": "10.0.0.2", 00:20:20.350 "trsvcid": "4420", 00:20:20.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.350 "prchk_reftag": false, 00:20:20.350 "prchk_guard": false, 00:20:20.350 "ctrlr_loss_timeout_sec": 0, 00:20:20.350 "reconnect_delay_sec": 0, 00:20:20.350 "fast_io_fail_timeout_sec": 0, 00:20:20.350 "psk": "/tmp/tmp.iquDBUIap0", 00:20:20.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.350 "hdgst": false, 00:20:20.350 "ddgst": false 00:20:20.350 } 00:20:20.350 }, 00:20:20.350 { 00:20:20.350 "method": "bdev_nvme_set_hotplug", 00:20:20.350 "params": { 00:20:20.350 "period_us": 100000, 00:20:20.350 "enable": false 00:20:20.350 } 00:20:20.350 }, 00:20:20.350 { 00:20:20.350 "method": "bdev_wait_for_examine" 00:20:20.350 } 00:20:20.350 ] 00:20:20.350 }, 00:20:20.350 { 00:20:20.350 "subsystem": "nbd", 00:20:20.350 "config": [] 00:20:20.350 } 00:20:20.350 ] 00:20:20.350 }' 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 928684 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 928684 ']' 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 928684 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 928684 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 928684' 00:20:20.350 killing process with pid 
928684 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 928684 00:20:20.350 Received shutdown signal, test time was about 10.000000 seconds 00:20:20.350 00:20:20.350 Latency(us) 00:20:20.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.350 =================================================================================================================== 00:20:20.350 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:20.350 [2024-07-25 19:11:12.547291] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 928684 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 928397 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 928397 ']' 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 928397 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:20.350 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 928397 00:20:20.609 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:20.609 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:20.609 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 928397' 00:20:20.609 killing process with pid 928397 00:20:20.609 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 
-- # kill 928397 00:20:20.609 [2024-07-25 19:11:12.842766] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:20.609 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 928397 00:20:20.868 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:20.868 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:20.868 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:20.868 "subsystems": [ 00:20:20.868 { 00:20:20.868 "subsystem": "keyring", 00:20:20.868 "config": [] 00:20:20.868 }, 00:20:20.868 { 00:20:20.868 "subsystem": "iobuf", 00:20:20.868 "config": [ 00:20:20.868 { 00:20:20.868 "method": "iobuf_set_options", 00:20:20.868 "params": { 00:20:20.868 "small_pool_count": 8192, 00:20:20.868 "large_pool_count": 1024, 00:20:20.868 "small_bufsize": 8192, 00:20:20.868 "large_bufsize": 135168 00:20:20.868 } 00:20:20.868 } 00:20:20.868 ] 00:20:20.868 }, 00:20:20.868 { 00:20:20.868 "subsystem": "sock", 00:20:20.868 "config": [ 00:20:20.868 { 00:20:20.868 "method": "sock_set_default_impl", 00:20:20.868 "params": { 00:20:20.868 "impl_name": "posix" 00:20:20.868 } 00:20:20.868 }, 00:20:20.868 { 00:20:20.868 "method": "sock_impl_set_options", 00:20:20.868 "params": { 00:20:20.868 "impl_name": "ssl", 00:20:20.868 "recv_buf_size": 4096, 00:20:20.868 "send_buf_size": 4096, 00:20:20.868 "enable_recv_pipe": true, 00:20:20.868 "enable_quickack": false, 00:20:20.868 "enable_placement_id": 0, 00:20:20.868 "enable_zerocopy_send_server": true, 00:20:20.868 "enable_zerocopy_send_client": false, 00:20:20.868 "zerocopy_threshold": 0, 00:20:20.868 "tls_version": 0, 00:20:20.868 "enable_ktls": false 00:20:20.868 } 00:20:20.868 }, 00:20:20.868 { 00:20:20.868 "method": "sock_impl_set_options", 00:20:20.868 "params": { 00:20:20.868 "impl_name": 
"posix", 00:20:20.868 "recv_buf_size": 2097152, 00:20:20.868 "send_buf_size": 2097152, 00:20:20.868 "enable_recv_pipe": true, 00:20:20.868 "enable_quickack": false, 00:20:20.868 "enable_placement_id": 0, 00:20:20.868 "enable_zerocopy_send_server": true, 00:20:20.868 "enable_zerocopy_send_client": false, 00:20:20.868 "zerocopy_threshold": 0, 00:20:20.868 "tls_version": 0, 00:20:20.868 "enable_ktls": false 00:20:20.868 } 00:20:20.868 } 00:20:20.868 ] 00:20:20.868 }, 00:20:20.868 { 00:20:20.868 "subsystem": "vmd", 00:20:20.868 "config": [] 00:20:20.868 }, 00:20:20.868 { 00:20:20.868 "subsystem": "accel", 00:20:20.868 "config": [ 00:20:20.868 { 00:20:20.868 "method": "accel_set_options", 00:20:20.868 "params": { 00:20:20.868 "small_cache_size": 128, 00:20:20.868 "large_cache_size": 16, 00:20:20.868 "task_count": 2048, 00:20:20.868 "sequence_count": 2048, 00:20:20.868 "buf_count": 2048 00:20:20.868 } 00:20:20.868 } 00:20:20.868 ] 00:20:20.868 }, 00:20:20.868 { 00:20:20.868 "subsystem": "bdev", 00:20:20.868 "config": [ 00:20:20.868 { 00:20:20.868 "method": "bdev_set_options", 00:20:20.868 "params": { 00:20:20.868 "bdev_io_pool_size": 65535, 00:20:20.868 "bdev_io_cache_size": 256, 00:20:20.868 "bdev_auto_examine": true, 00:20:20.868 "iobuf_small_cache_size": 128, 00:20:20.868 "iobuf_large_cache_size": 16 00:20:20.868 } 00:20:20.868 }, 00:20:20.868 { 00:20:20.868 "method": "bdev_raid_set_options", 00:20:20.868 "params": { 00:20:20.868 "process_window_size_kb": 1024, 00:20:20.868 "process_max_bandwidth_mb_sec": 0 00:20:20.868 } 00:20:20.868 }, 00:20:20.868 { 00:20:20.868 "method": "bdev_iscsi_set_options", 00:20:20.868 "params": { 00:20:20.868 "timeout_sec": 30 00:20:20.868 } 00:20:20.868 }, 00:20:20.868 { 00:20:20.868 "method": "bdev_nvme_set_options", 00:20:20.868 "params": { 00:20:20.868 "action_on_timeout": "none", 00:20:20.868 "timeout_us": 0, 00:20:20.868 "timeout_admin_us": 0, 00:20:20.868 "keep_alive_timeout_ms": 10000, 00:20:20.868 "arbitration_burst": 0, 
00:20:20.869 "low_priority_weight": 0, 00:20:20.869 "medium_priority_weight": 0, 00:20:20.869 "high_priority_weight": 0, 00:20:20.869 "nvme_adminq_poll_period_us": 10000, 00:20:20.869 "nvme_ioq_poll_period_us": 0, 00:20:20.869 "io_queue_requests": 0, 00:20:20.869 "delay_cmd_submit": true, 00:20:20.869 "transport_retry_count": 4, 00:20:20.869 "bdev_retry_count": 3, 00:20:20.869 "transport_ack_timeout": 0, 00:20:20.869 "ctrlr_loss_timeout_sec": 0, 00:20:20.869 "reconnect_delay_sec": 0, 00:20:20.869 "fast_io_fail_timeout_sec": 0, 00:20:20.869 "disable_auto_failback": false, 00:20:20.869 "generate_uuids": false, 00:20:20.869 "transport_tos": 0, 00:20:20.869 "nvme_error_stat": false, 00:20:20.869 "rdma_srq_size": 0, 00:20:20.869 "io_path_stat": false, 00:20:20.869 "allow_accel_sequence": false, 00:20:20.869 "rdma_max_cq_size": 0, 00:20:20.869 "rdma_cm_event_timeout_ms": 0, 00:20:20.869 "dhchap_digests": [ 00:20:20.869 "sha256", 00:20:20.869 "sha384", 00:20:20.869 "sha512" 00:20:20.869 ], 00:20:20.869 "dhchap_dhgroups": [ 00:20:20.869 "null", 00:20:20.869 "ffdhe2048", 00:20:20.869 "ffdhe3072", 00:20:20.869 "ffdhe4096", 00:20:20.869 "ffdhe6144", 00:20:20.869 "ffdhe8192" 00:20:20.869 ] 00:20:20.869 } 00:20:20.869 }, 00:20:20.869 { 00:20:20.869 "method": "bdev_nvme_set_hotplug", 00:20:20.869 "params": { 00:20:20.869 "period_us": 100000, 00:20:20.869 "enable": false 00:20:20.869 } 00:20:20.869 }, 00:20:20.869 { 00:20:20.869 "method": "bdev_malloc_create", 00:20:20.869 "params": { 00:20:20.869 "name": "malloc0", 00:20:20.869 "num_blocks": 8192, 00:20:20.869 "block_size": 4096, 00:20:20.869 "physical_block_size": 4096, 00:20:20.869 "uuid": "22477beb-b7ef-4a76-81de-bc6eaf199a12", 00:20:20.869 "optimal_io_boundary": 0, 00:20:20.869 "md_size": 0, 00:20:20.869 "dif_type": 0, 00:20:20.869 "dif_is_head_of_md": false, 00:20:20.869 "dif_pi_format": 0 00:20:20.869 } 00:20:20.869 }, 00:20:20.869 { 00:20:20.869 "method": "bdev_wait_for_examine" 00:20:20.869 } 00:20:20.869 ] 00:20:20.869 
}, 00:20:20.869 { 00:20:20.869 "subsystem": "nbd", 00:20:20.869 "config": [] 00:20:20.869 }, 00:20:20.869 { 00:20:20.869 "subsystem": "scheduler", 00:20:20.869 "config": [ 00:20:20.869 { 00:20:20.869 "method": "framework_set_scheduler", 00:20:20.869 "params": { 00:20:20.869 "name": "static" 00:20:20.869 } 00:20:20.869 } 00:20:20.869 ] 00:20:20.869 }, 00:20:20.869 { 00:20:20.869 "subsystem": "nvmf", 00:20:20.869 "config": [ 00:20:20.869 { 00:20:20.869 "method": "nvmf_set_config", 00:20:20.869 "params": { 00:20:20.869 "discovery_filter": "match_any", 00:20:20.869 "admin_cmd_passthru": { 00:20:20.869 "identify_ctrlr": false 00:20:20.869 } 00:20:20.869 } 00:20:20.869 }, 00:20:20.869 { 00:20:20.869 "method": "nvmf_set_max_subsystems", 00:20:20.869 "params": { 00:20:20.869 "max_subsystems": 1024 00:20:20.869 } 00:20:20.869 }, 00:20:20.869 { 00:20:20.869 "method": "nvmf_set_crdt", 00:20:20.869 "params": { 00:20:20.869 "crdt1": 0, 00:20:20.869 "crdt2": 0, 00:20:20.869 "crdt3": 0 00:20:20.869 } 00:20:20.869 }, 00:20:20.869 { 00:20:20.869 "method": "nvmf_create_transport", 00:20:20.869 "params": { 00:20:20.869 "trtype": "TCP", 00:20:20.869 "max_queue_depth": 128, 00:20:20.869 "max_io_qpairs_per_ctrlr": 127, 00:20:20.869 "in_capsule_data_size": 4096, 00:20:20.869 "max_io_size": 131072, 00:20:20.869 "io_unit_size": 131072, 00:20:20.869 "max_aq_depth": 128, 00:20:20.869 "num_shared_buffers": 511, 00:20:20.869 "buf_cache_size": 4294967295, 00:20:20.869 "dif_insert_or_strip": false, 00:20:20.869 "zcopy": false, 00:20:20.869 "c2h_success": false, 00:20:20.869 "sock_priority": 0, 00:20:20.869 "abort_timeout_sec": 1, 00:20:20.869 "ack_timeout": 0, 00:20:20.869 "data_wr_pool_size": 0 00:20:20.869 } 00:20:20.869 }, 00:20:20.869 { 00:20:20.869 "method": "nvmf_create_subsystem", 00:20:20.869 "params": { 00:20:20.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.869 "allow_any_host": false, 00:20:20.869 "serial_number": "SPDK00000000000001", 00:20:20.869 "model_number": "SPDK bdev 
Controller", 00:20:20.869 "max_namespaces": 10, 00:20:20.869 "min_cntlid": 1, 00:20:20.869 "max_cntlid": 65519, 00:20:20.869 "ana_reporting": false 00:20:20.869 } 00:20:20.869 }, 00:20:20.869 { 00:20:20.869 "method": "nvmf_subsystem_add_host", 00:20:20.869 "params": { 00:20:20.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.869 "host": "nqn.2016-06.io.spdk:host1", 00:20:20.869 "psk": "/tmp/tmp.iquDBUIap0" 00:20:20.869 } 00:20:20.869 }, 00:20:20.869 { 00:20:20.869 "method": "nvmf_subsystem_add_ns", 00:20:20.869 "params": { 00:20:20.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.869 "namespace": { 00:20:20.869 "nsid": 1, 00:20:20.869 "bdev_name": "malloc0", 00:20:20.869 "nguid": "22477BEBB7EF4A7681DEBC6EAF199A12", 00:20:20.869 "uuid": "22477beb-b7ef-4a76-81de-bc6eaf199a12", 00:20:20.869 "no_auto_visible": false 00:20:20.869 } 00:20:20.869 } 00:20:20.869 }, 00:20:20.869 { 00:20:20.869 "method": "nvmf_subsystem_add_listener", 00:20:20.869 "params": { 00:20:20.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.869 "listen_address": { 00:20:20.869 "trtype": "TCP", 00:20:20.869 "adrfam": "IPv4", 00:20:20.869 "traddr": "10.0.0.2", 00:20:20.869 "trsvcid": "4420" 00:20:20.869 }, 00:20:20.869 "secure_channel": true 00:20:20.869 } 00:20:20.869 } 00:20:20.869 ] 00:20:20.869 } 00:20:20.869 ] 00:20:20.869 }' 00:20:20.869 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:20.869 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.869 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=928962 00:20:20.869 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:20.869 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 928962 00:20:20.869 19:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 928962 ']' 00:20:20.869 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.869 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:20.869 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.869 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:20.869 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.869 [2024-07-25 19:11:13.195197] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:20.869 [2024-07-25 19:11:13.195283] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.869 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.869 [2024-07-25 19:11:13.273746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.126 [2024-07-25 19:11:13.388241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.126 [2024-07-25 19:11:13.388301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.126 [2024-07-25 19:11:13.388317] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.126 [2024-07-25 19:11:13.388331] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:21.126 [2024-07-25 19:11:13.388342] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.126 [2024-07-25 19:11:13.388441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.383 [2024-07-25 19:11:13.622234] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.383 [2024-07-25 19:11:13.647726] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:21.383 [2024-07-25 19:11:13.663792] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.383 [2024-07-25 19:11:13.664013] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.950 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:21.950 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:21.950 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:21.950 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:21.950 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.950 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.950 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=929112 00:20:21.950 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 929112 /var/tmp/bdevperf.sock 00:20:21.950 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 929112 ']' 00:20:21.950 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.950 19:11:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:21.950 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:21.950 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.950 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:21.950 "subsystems": [ 00:20:21.950 { 00:20:21.950 "subsystem": "keyring", 00:20:21.950 "config": [] 00:20:21.950 }, 00:20:21.950 { 00:20:21.950 "subsystem": "iobuf", 00:20:21.950 "config": [ 00:20:21.950 { 00:20:21.950 "method": "iobuf_set_options", 00:20:21.950 "params": { 00:20:21.950 "small_pool_count": 8192, 00:20:21.950 "large_pool_count": 1024, 00:20:21.950 "small_bufsize": 8192, 00:20:21.950 "large_bufsize": 135168 00:20:21.950 } 00:20:21.950 } 00:20:21.950 ] 00:20:21.950 }, 00:20:21.950 { 00:20:21.950 "subsystem": "sock", 00:20:21.950 "config": [ 00:20:21.950 { 00:20:21.950 "method": "sock_set_default_impl", 00:20:21.950 "params": { 00:20:21.950 "impl_name": "posix" 00:20:21.950 } 00:20:21.950 }, 00:20:21.950 { 00:20:21.950 "method": "sock_impl_set_options", 00:20:21.950 "params": { 00:20:21.950 "impl_name": "ssl", 00:20:21.950 "recv_buf_size": 4096, 00:20:21.950 "send_buf_size": 4096, 00:20:21.950 "enable_recv_pipe": true, 00:20:21.950 "enable_quickack": false, 00:20:21.950 "enable_placement_id": 0, 00:20:21.950 "enable_zerocopy_send_server": true, 00:20:21.950 "enable_zerocopy_send_client": false, 00:20:21.950 "zerocopy_threshold": 0, 00:20:21.950 "tls_version": 0, 00:20:21.950 "enable_ktls": false 00:20:21.950 } 00:20:21.950 }, 
00:20:21.950 { 00:20:21.950 "method": "sock_impl_set_options", 00:20:21.950 "params": { 00:20:21.950 "impl_name": "posix", 00:20:21.950 "recv_buf_size": 2097152, 00:20:21.950 "send_buf_size": 2097152, 00:20:21.950 "enable_recv_pipe": true, 00:20:21.950 "enable_quickack": false, 00:20:21.950 "enable_placement_id": 0, 00:20:21.950 "enable_zerocopy_send_server": true, 00:20:21.950 "enable_zerocopy_send_client": false, 00:20:21.950 "zerocopy_threshold": 0, 00:20:21.950 "tls_version": 0, 00:20:21.950 "enable_ktls": false 00:20:21.950 } 00:20:21.950 } 00:20:21.950 ] 00:20:21.950 }, 00:20:21.950 { 00:20:21.950 "subsystem": "vmd", 00:20:21.950 "config": [] 00:20:21.950 }, 00:20:21.950 { 00:20:21.950 "subsystem": "accel", 00:20:21.950 "config": [ 00:20:21.950 { 00:20:21.950 "method": "accel_set_options", 00:20:21.950 "params": { 00:20:21.950 "small_cache_size": 128, 00:20:21.950 "large_cache_size": 16, 00:20:21.950 "task_count": 2048, 00:20:21.950 "sequence_count": 2048, 00:20:21.950 "buf_count": 2048 00:20:21.950 } 00:20:21.950 } 00:20:21.950 ] 00:20:21.950 }, 00:20:21.950 { 00:20:21.950 "subsystem": "bdev", 00:20:21.950 "config": [ 00:20:21.950 { 00:20:21.950 "method": "bdev_set_options", 00:20:21.950 "params": { 00:20:21.950 "bdev_io_pool_size": 65535, 00:20:21.950 "bdev_io_cache_size": 256, 00:20:21.951 "bdev_auto_examine": true, 00:20:21.951 "iobuf_small_cache_size": 128, 00:20:21.951 "iobuf_large_cache_size": 16 00:20:21.951 } 00:20:21.951 }, 00:20:21.951 { 00:20:21.951 "method": "bdev_raid_set_options", 00:20:21.951 "params": { 00:20:21.951 "process_window_size_kb": 1024, 00:20:21.951 "process_max_bandwidth_mb_sec": 0 00:20:21.951 } 00:20:21.951 }, 00:20:21.951 { 00:20:21.951 "method": "bdev_iscsi_set_options", 00:20:21.951 "params": { 00:20:21.951 "timeout_sec": 30 00:20:21.951 } 00:20:21.951 }, 00:20:21.951 { 00:20:21.951 "method": "bdev_nvme_set_options", 00:20:21.951 "params": { 00:20:21.951 "action_on_timeout": "none", 00:20:21.951 "timeout_us": 0, 00:20:21.951 
"timeout_admin_us": 0, 00:20:21.951 "keep_alive_timeout_ms": 10000, 00:20:21.951 "arbitration_burst": 0, 00:20:21.951 "low_priority_weight": 0, 00:20:21.951 "medium_priority_weight": 0, 00:20:21.951 "high_priority_weight": 0, 00:20:21.951 "nvme_adminq_poll_period_us": 10000, 00:20:21.951 "nvme_ioq_poll_period_us": 0, 00:20:21.951 "io_queue_requests": 512, 00:20:21.951 "delay_cmd_submit": true, 00:20:21.951 "transport_retry_count": 4, 00:20:21.951 "bdev_retry_count": 3, 00:20:21.951 "transport_ack_timeout": 0, 00:20:21.951 "ctrlr_loss_timeout_sec": 0, 00:20:21.951 "reconnect_delay_sec": 0, 00:20:21.951 "fast_io_fail_timeout_sec": 0, 00:20:21.951 "disable_auto_failback": false, 00:20:21.951 "generate_uuids": false, 00:20:21.951 "transport_tos": 0, 00:20:21.951 "nvme_error_stat": false, 00:20:21.951 "rdma_srq_size": 0, 00:20:21.951 "io_path_stat": false, 00:20:21.951 "allow_accel_sequence": false, 00:20:21.951 "rdma_max_cq_size": 0, 00:20:21.951 "rdma_cm_event_timeout_ms": 0, 00:20:21.951 "dhchap_digests": [ 00:20:21.951 "sha256", 00:20:21.951 "sha384", 00:20:21.951 "sha512" 00:20:21.951 ], 00:20:21.951 "dhchap_dhgroups": [ 00:20:21.951 "null", 00:20:21.951 "ffdhe2048", 00:20:21.951 "ffdhe3072", 00:20:21.951 "ffdhe4096", 00:20:21.951 "ffdhe6144", 00:20:21.951 "ffdhe8192" 00:20:21.951 ] 00:20:21.951 } 00:20:21.951 }, 00:20:21.951 { 00:20:21.951 "method": "bdev_nvme_attach_controller", 00:20:21.951 "params": { 00:20:21.951 "name": "TLSTEST", 00:20:21.951 "trtype": "TCP", 00:20:21.951 "adrfam": "IPv4", 00:20:21.951 "traddr": "10.0.0.2", 00:20:21.951 "trsvcid": "4420", 00:20:21.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.951 "prchk_reftag": false, 00:20:21.951 "prchk_guard": false, 00:20:21.951 "ctrlr_loss_timeout_sec": 0, 00:20:21.951 "reconnect_delay_sec": 0, 00:20:21.951 "fast_io_fail_timeout_sec": 0, 00:20:21.951 "psk": "/tmp/tmp.iquDBUIap0", 00:20:21.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.951 "hdgst": false, 00:20:21.951 "ddgst": false 
00:20:21.951 } 00:20:21.951 }, 00:20:21.951 { 00:20:21.951 "method": "bdev_nvme_set_hotplug", 00:20:21.951 "params": { 00:20:21.951 "period_us": 100000, 00:20:21.951 "enable": false 00:20:21.951 } 00:20:21.951 }, 00:20:21.951 { 00:20:21.951 "method": "bdev_wait_for_examine" 00:20:21.951 } 00:20:21.951 ] 00:20:21.951 }, 00:20:21.951 { 00:20:21.951 "subsystem": "nbd", 00:20:21.951 "config": [] 00:20:21.951 } 00:20:21.951 ] 00:20:21.951 }' 00:20:21.951 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:21.951 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.951 [2024-07-25 19:11:14.236428] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:21.951 [2024-07-25 19:11:14.236506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929112 ] 00:20:21.951 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.951 [2024-07-25 19:11:14.304536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.951 [2024-07-25 19:11:14.410607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.209 [2024-07-25 19:11:14.582911] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.209 [2024-07-25 19:11:14.583062] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:22.773 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:22.773 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:22.773 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:23.031 Running I/O for 10 seconds... 00:20:32.992 00:20:32.992 Latency(us) 00:20:32.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.992 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:32.992 Verification LBA range: start 0x0 length 0x2000 00:20:32.992 TLSTESTn1 : 10.08 1301.15 5.08 0.00 0.00 98043.91 7573.05 83886.08 00:20:32.992 =================================================================================================================== 00:20:32.992 Total : 1301.15 5.08 0.00 0.00 98043.91 7573.05 83886.08 00:20:32.992 0 00:20:32.992 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:32.992 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 929112 00:20:32.992 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 929112 ']' 00:20:32.992 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 929112 00:20:32.992 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:32.992 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.992 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 929112 00:20:32.992 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:32.993 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:32.993 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 929112' 00:20:32.993 killing process with pid 929112 00:20:32.993 19:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 929112 00:20:32.993 Received shutdown signal, test time was about 10.000000 seconds 00:20:32.993 00:20:32.993 Latency(us) 00:20:32.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.993 =================================================================================================================== 00:20:32.993 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.993 [2024-07-25 19:11:25.445682] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:32.993 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 929112 00:20:33.251 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 928962 00:20:33.251 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 928962 ']' 00:20:33.251 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 928962 00:20:33.251 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:33.251 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:33.251 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 928962 00:20:33.509 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:33.509 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:33.509 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 928962' 00:20:33.509 killing process with pid 928962 00:20:33.509 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 928962 00:20:33.509 [2024-07-25 
19:11:25.746267] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:33.509 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 928962 00:20:33.767 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:33.767 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.767 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:33.767 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.767 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=930467 00:20:33.767 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:33.767 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 930467 00:20:33.767 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 930467 ']' 00:20:33.767 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.767 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:33.767 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:33.767 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:33.767 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.767 [2024-07-25 19:11:26.096911] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:33.767 [2024-07-25 19:11:26.097021] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.767 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.767 [2024-07-25 19:11:26.177141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.025 [2024-07-25 19:11:26.291242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.025 [2024-07-25 19:11:26.291306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.025 [2024-07-25 19:11:26.291332] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.025 [2024-07-25 19:11:26.291346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.025 [2024-07-25 19:11:26.291357] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:34.025 [2024-07-25 19:11:26.291393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.590 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:34.590 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:34.590 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:34.590 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:34.590 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.848 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.848 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.iquDBUIap0 00:20:34.848 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.iquDBUIap0 00:20:34.848 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:35.106 [2024-07-25 19:11:27.344175] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.106 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:35.364 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:35.622 [2024-07-25 19:11:27.921676] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.622 [2024-07-25 19:11:27.921903] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:35.622 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:35.880 malloc0 00:20:35.880 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:36.138 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iquDBUIap0 00:20:36.396 [2024-07-25 19:11:28.691594] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:36.396 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=930778 00:20:36.396 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.396 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:36.396 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 930778 /var/tmp/bdevperf.sock 00:20:36.396 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 930778 ']' 00:20:36.396 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.396 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:36.396 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:36.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.396 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:36.396 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.396 [2024-07-25 19:11:28.756546] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:36.396 [2024-07-25 19:11:28.756626] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930778 ] 00:20:36.397 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.397 [2024-07-25 19:11:28.831197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.655 [2024-07-25 19:11:28.949897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.588 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:37.588 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:37.588 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iquDBUIap0 00:20:37.588 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:37.846 [2024-07-25 19:11:30.196941] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.846 nvme0n1 00:20:37.846 19:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:38.104 Running I/O for 1 seconds... 00:20:39.037 00:20:39.037 Latency(us) 00:20:39.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.037 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:39.037 Verification LBA range: start 0x0 length 0x2000 00:20:39.037 nvme0n1 : 1.06 1879.09 7.34 0.00 0.00 66319.00 7961.41 95536.92 00:20:39.037 =================================================================================================================== 00:20:39.037 Total : 1879.09 7.34 0.00 0.00 66319.00 7961.41 95536.92 00:20:39.037 0 00:20:39.037 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 930778 00:20:39.037 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 930778 ']' 00:20:39.037 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 930778 00:20:39.037 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:39.037 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:39.037 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 930778 00:20:39.295 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:39.295 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:39.295 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 930778' 00:20:39.295 killing process with pid 930778 00:20:39.295 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 930778 
00:20:39.295 Received shutdown signal, test time was about 1.000000 seconds 00:20:39.295 00:20:39.295 Latency(us) 00:20:39.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.295 =================================================================================================================== 00:20:39.295 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.295 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 930778 00:20:39.553 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 930467 00:20:39.553 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 930467 ']' 00:20:39.553 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 930467 00:20:39.553 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:39.553 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:39.553 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 930467 00:20:39.553 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:39.553 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:39.553 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 930467' 00:20:39.553 killing process with pid 930467 00:20:39.553 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 930467 00:20:39.553 [2024-07-25 19:11:31.832500] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:39.553 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 930467 00:20:39.811 19:11:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:20:39.811 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:39.811 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:39.811 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.811 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=931196 00:20:39.811 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:39.811 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 931196 00:20:39.811 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 931196 ']' 00:20:39.811 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.811 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:39.811 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.811 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:39.811 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.811 [2024-07-25 19:11:32.174238] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:39.811 [2024-07-25 19:11:32.174338] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.811 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.811 [2024-07-25 19:11:32.248621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.069 [2024-07-25 19:11:32.355162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.069 [2024-07-25 19:11:32.355214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.069 [2024-07-25 19:11:32.355234] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.069 [2024-07-25 19:11:32.355245] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.069 [2024-07-25 19:11:32.355254] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:40.069 [2024-07-25 19:11:32.355280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.034 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:41.034 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:41.034 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:41.034 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:41.034 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.035 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.035 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:20:41.035 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.035 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.035 [2024-07-25 19:11:33.189924] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.035 malloc0 00:20:41.035 [2024-07-25 19:11:33.223506] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:41.035 [2024-07-25 19:11:33.239290] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.035 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.035 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=931359 00:20:41.035 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:41.035 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@256 -- # waitforlisten 931359 /var/tmp/bdevperf.sock 00:20:41.035 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 931359 ']' 00:20:41.035 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.035 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:41.035 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.035 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:41.035 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.035 [2024-07-25 19:11:33.306623] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:41.035 [2024-07-25 19:11:33.306701] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931359 ] 00:20:41.035 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.035 [2024-07-25 19:11:33.378578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.294 [2024-07-25 19:11:33.497731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:41.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:41.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iquDBUIap0 00:20:41.551 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:41.809 [2024-07-25 19:11:34.104547] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.809 nvme0n1 00:20:41.809 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:42.067 Running I/O for 1 seconds... 
00:20:42.999 00:20:42.999 Latency(us) 00:20:42.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.999 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:42.999 Verification LBA range: start 0x0 length 0x2000 00:20:42.999 nvme0n1 : 1.06 1913.80 7.48 0.00 0.00 65295.90 6456.51 104080.88 00:20:42.999 =================================================================================================================== 00:20:42.999 Total : 1913.80 7.48 0.00 0.00 65295.90 6456.51 104080.88 00:20:42.999 0 00:20:42.999 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:20:42.999 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.999 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.257 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.257 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:20:43.257 "subsystems": [ 00:20:43.257 { 00:20:43.257 "subsystem": "keyring", 00:20:43.257 "config": [ 00:20:43.257 { 00:20:43.257 "method": "keyring_file_add_key", 00:20:43.257 "params": { 00:20:43.257 "name": "key0", 00:20:43.257 "path": "/tmp/tmp.iquDBUIap0" 00:20:43.257 } 00:20:43.257 } 00:20:43.257 ] 00:20:43.257 }, 00:20:43.257 { 00:20:43.257 "subsystem": "iobuf", 00:20:43.257 "config": [ 00:20:43.257 { 00:20:43.257 "method": "iobuf_set_options", 00:20:43.257 "params": { 00:20:43.257 "small_pool_count": 8192, 00:20:43.257 "large_pool_count": 1024, 00:20:43.257 "small_bufsize": 8192, 00:20:43.257 "large_bufsize": 135168 00:20:43.257 } 00:20:43.257 } 00:20:43.258 ] 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "subsystem": "sock", 00:20:43.258 "config": [ 00:20:43.258 { 00:20:43.258 "method": "sock_set_default_impl", 00:20:43.258 "params": { 00:20:43.258 "impl_name": "posix" 00:20:43.258 } 
00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "method": "sock_impl_set_options", 00:20:43.258 "params": { 00:20:43.258 "impl_name": "ssl", 00:20:43.258 "recv_buf_size": 4096, 00:20:43.258 "send_buf_size": 4096, 00:20:43.258 "enable_recv_pipe": true, 00:20:43.258 "enable_quickack": false, 00:20:43.258 "enable_placement_id": 0, 00:20:43.258 "enable_zerocopy_send_server": true, 00:20:43.258 "enable_zerocopy_send_client": false, 00:20:43.258 "zerocopy_threshold": 0, 00:20:43.258 "tls_version": 0, 00:20:43.258 "enable_ktls": false 00:20:43.258 } 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "method": "sock_impl_set_options", 00:20:43.258 "params": { 00:20:43.258 "impl_name": "posix", 00:20:43.258 "recv_buf_size": 2097152, 00:20:43.258 "send_buf_size": 2097152, 00:20:43.258 "enable_recv_pipe": true, 00:20:43.258 "enable_quickack": false, 00:20:43.258 "enable_placement_id": 0, 00:20:43.258 "enable_zerocopy_send_server": true, 00:20:43.258 "enable_zerocopy_send_client": false, 00:20:43.258 "zerocopy_threshold": 0, 00:20:43.258 "tls_version": 0, 00:20:43.258 "enable_ktls": false 00:20:43.258 } 00:20:43.258 } 00:20:43.258 ] 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "subsystem": "vmd", 00:20:43.258 "config": [] 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "subsystem": "accel", 00:20:43.258 "config": [ 00:20:43.258 { 00:20:43.258 "method": "accel_set_options", 00:20:43.258 "params": { 00:20:43.258 "small_cache_size": 128, 00:20:43.258 "large_cache_size": 16, 00:20:43.258 "task_count": 2048, 00:20:43.258 "sequence_count": 2048, 00:20:43.258 "buf_count": 2048 00:20:43.258 } 00:20:43.258 } 00:20:43.258 ] 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "subsystem": "bdev", 00:20:43.258 "config": [ 00:20:43.258 { 00:20:43.258 "method": "bdev_set_options", 00:20:43.258 "params": { 00:20:43.258 "bdev_io_pool_size": 65535, 00:20:43.258 "bdev_io_cache_size": 256, 00:20:43.258 "bdev_auto_examine": true, 00:20:43.258 "iobuf_small_cache_size": 128, 00:20:43.258 "iobuf_large_cache_size": 16 
00:20:43.258 } 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "method": "bdev_raid_set_options", 00:20:43.258 "params": { 00:20:43.258 "process_window_size_kb": 1024, 00:20:43.258 "process_max_bandwidth_mb_sec": 0 00:20:43.258 } 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "method": "bdev_iscsi_set_options", 00:20:43.258 "params": { 00:20:43.258 "timeout_sec": 30 00:20:43.258 } 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "method": "bdev_nvme_set_options", 00:20:43.258 "params": { 00:20:43.258 "action_on_timeout": "none", 00:20:43.258 "timeout_us": 0, 00:20:43.258 "timeout_admin_us": 0, 00:20:43.258 "keep_alive_timeout_ms": 10000, 00:20:43.258 "arbitration_burst": 0, 00:20:43.258 "low_priority_weight": 0, 00:20:43.258 "medium_priority_weight": 0, 00:20:43.258 "high_priority_weight": 0, 00:20:43.258 "nvme_adminq_poll_period_us": 10000, 00:20:43.258 "nvme_ioq_poll_period_us": 0, 00:20:43.258 "io_queue_requests": 0, 00:20:43.258 "delay_cmd_submit": true, 00:20:43.258 "transport_retry_count": 4, 00:20:43.258 "bdev_retry_count": 3, 00:20:43.258 "transport_ack_timeout": 0, 00:20:43.258 "ctrlr_loss_timeout_sec": 0, 00:20:43.258 "reconnect_delay_sec": 0, 00:20:43.258 "fast_io_fail_timeout_sec": 0, 00:20:43.258 "disable_auto_failback": false, 00:20:43.258 "generate_uuids": false, 00:20:43.258 "transport_tos": 0, 00:20:43.258 "nvme_error_stat": false, 00:20:43.258 "rdma_srq_size": 0, 00:20:43.258 "io_path_stat": false, 00:20:43.258 "allow_accel_sequence": false, 00:20:43.258 "rdma_max_cq_size": 0, 00:20:43.258 "rdma_cm_event_timeout_ms": 0, 00:20:43.258 "dhchap_digests": [ 00:20:43.258 "sha256", 00:20:43.258 "sha384", 00:20:43.258 "sha512" 00:20:43.258 ], 00:20:43.258 "dhchap_dhgroups": [ 00:20:43.258 "null", 00:20:43.258 "ffdhe2048", 00:20:43.258 "ffdhe3072", 00:20:43.258 "ffdhe4096", 00:20:43.258 "ffdhe6144", 00:20:43.258 "ffdhe8192" 00:20:43.258 ] 00:20:43.258 } 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "method": "bdev_nvme_set_hotplug", 00:20:43.258 "params": { 00:20:43.258 
"period_us": 100000, 00:20:43.258 "enable": false 00:20:43.258 } 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "method": "bdev_malloc_create", 00:20:43.258 "params": { 00:20:43.258 "name": "malloc0", 00:20:43.258 "num_blocks": 8192, 00:20:43.258 "block_size": 4096, 00:20:43.258 "physical_block_size": 4096, 00:20:43.258 "uuid": "1a5a30d6-2c2d-41b6-a91a-3219eabbd341", 00:20:43.258 "optimal_io_boundary": 0, 00:20:43.258 "md_size": 0, 00:20:43.258 "dif_type": 0, 00:20:43.258 "dif_is_head_of_md": false, 00:20:43.258 "dif_pi_format": 0 00:20:43.258 } 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "method": "bdev_wait_for_examine" 00:20:43.258 } 00:20:43.258 ] 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "subsystem": "nbd", 00:20:43.258 "config": [] 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "subsystem": "scheduler", 00:20:43.258 "config": [ 00:20:43.258 { 00:20:43.258 "method": "framework_set_scheduler", 00:20:43.258 "params": { 00:20:43.258 "name": "static" 00:20:43.258 } 00:20:43.258 } 00:20:43.258 ] 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "subsystem": "nvmf", 00:20:43.258 "config": [ 00:20:43.258 { 00:20:43.258 "method": "nvmf_set_config", 00:20:43.258 "params": { 00:20:43.258 "discovery_filter": "match_any", 00:20:43.258 "admin_cmd_passthru": { 00:20:43.258 "identify_ctrlr": false 00:20:43.258 } 00:20:43.258 } 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "method": "nvmf_set_max_subsystems", 00:20:43.258 "params": { 00:20:43.258 "max_subsystems": 1024 00:20:43.258 } 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "method": "nvmf_set_crdt", 00:20:43.258 "params": { 00:20:43.258 "crdt1": 0, 00:20:43.258 "crdt2": 0, 00:20:43.258 "crdt3": 0 00:20:43.258 } 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "method": "nvmf_create_transport", 00:20:43.258 "params": { 00:20:43.258 "trtype": "TCP", 00:20:43.258 "max_queue_depth": 128, 00:20:43.258 "max_io_qpairs_per_ctrlr": 127, 00:20:43.258 "in_capsule_data_size": 4096, 00:20:43.258 "max_io_size": 131072, 00:20:43.258 "io_unit_size": 
131072, 00:20:43.258 "max_aq_depth": 128, 00:20:43.258 "num_shared_buffers": 511, 00:20:43.258 "buf_cache_size": 4294967295, 00:20:43.258 "dif_insert_or_strip": false, 00:20:43.258 "zcopy": false, 00:20:43.258 "c2h_success": false, 00:20:43.258 "sock_priority": 0, 00:20:43.258 "abort_timeout_sec": 1, 00:20:43.258 "ack_timeout": 0, 00:20:43.258 "data_wr_pool_size": 0 00:20:43.258 } 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "method": "nvmf_create_subsystem", 00:20:43.258 "params": { 00:20:43.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.258 "allow_any_host": false, 00:20:43.258 "serial_number": "00000000000000000000", 00:20:43.258 "model_number": "SPDK bdev Controller", 00:20:43.258 "max_namespaces": 32, 00:20:43.258 "min_cntlid": 1, 00:20:43.258 "max_cntlid": 65519, 00:20:43.258 "ana_reporting": false 00:20:43.258 } 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "method": "nvmf_subsystem_add_host", 00:20:43.258 "params": { 00:20:43.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.258 "host": "nqn.2016-06.io.spdk:host1", 00:20:43.258 "psk": "key0" 00:20:43.258 } 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "method": "nvmf_subsystem_add_ns", 00:20:43.258 "params": { 00:20:43.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.258 "namespace": { 00:20:43.258 "nsid": 1, 00:20:43.258 "bdev_name": "malloc0", 00:20:43.258 "nguid": "1A5A30D62C2D41B6A91A3219EABBD341", 00:20:43.258 "uuid": "1a5a30d6-2c2d-41b6-a91a-3219eabbd341", 00:20:43.258 "no_auto_visible": false 00:20:43.258 } 00:20:43.258 } 00:20:43.258 }, 00:20:43.258 { 00:20:43.258 "method": "nvmf_subsystem_add_listener", 00:20:43.258 "params": { 00:20:43.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.258 "listen_address": { 00:20:43.258 "trtype": "TCP", 00:20:43.258 "adrfam": "IPv4", 00:20:43.258 "traddr": "10.0.0.2", 00:20:43.258 "trsvcid": "4420" 00:20:43.258 }, 00:20:43.258 "secure_channel": false, 00:20:43.258 "sock_impl": "ssl" 00:20:43.258 } 00:20:43.258 } 00:20:43.258 ] 00:20:43.258 } 00:20:43.258 ] 
00:20:43.259 }' 00:20:43.259 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:43.517 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:20:43.517 "subsystems": [ 00:20:43.517 { 00:20:43.517 "subsystem": "keyring", 00:20:43.517 "config": [ 00:20:43.517 { 00:20:43.517 "method": "keyring_file_add_key", 00:20:43.517 "params": { 00:20:43.517 "name": "key0", 00:20:43.517 "path": "/tmp/tmp.iquDBUIap0" 00:20:43.517 } 00:20:43.517 } 00:20:43.517 ] 00:20:43.517 }, 00:20:43.517 { 00:20:43.517 "subsystem": "iobuf", 00:20:43.517 "config": [ 00:20:43.517 { 00:20:43.517 "method": "iobuf_set_options", 00:20:43.517 "params": { 00:20:43.517 "small_pool_count": 8192, 00:20:43.517 "large_pool_count": 1024, 00:20:43.517 "small_bufsize": 8192, 00:20:43.517 "large_bufsize": 135168 00:20:43.517 } 00:20:43.517 } 00:20:43.517 ] 00:20:43.517 }, 00:20:43.517 { 00:20:43.517 "subsystem": "sock", 00:20:43.517 "config": [ 00:20:43.517 { 00:20:43.517 "method": "sock_set_default_impl", 00:20:43.517 "params": { 00:20:43.517 "impl_name": "posix" 00:20:43.517 } 00:20:43.517 }, 00:20:43.517 { 00:20:43.517 "method": "sock_impl_set_options", 00:20:43.517 "params": { 00:20:43.517 "impl_name": "ssl", 00:20:43.517 "recv_buf_size": 4096, 00:20:43.517 "send_buf_size": 4096, 00:20:43.517 "enable_recv_pipe": true, 00:20:43.517 "enable_quickack": false, 00:20:43.517 "enable_placement_id": 0, 00:20:43.517 "enable_zerocopy_send_server": true, 00:20:43.517 "enable_zerocopy_send_client": false, 00:20:43.517 "zerocopy_threshold": 0, 00:20:43.517 "tls_version": 0, 00:20:43.517 "enable_ktls": false 00:20:43.517 } 00:20:43.517 }, 00:20:43.517 { 00:20:43.517 "method": "sock_impl_set_options", 00:20:43.517 "params": { 00:20:43.517 "impl_name": "posix", 00:20:43.517 "recv_buf_size": 2097152, 00:20:43.517 "send_buf_size": 2097152, 00:20:43.517 
"enable_recv_pipe": true, 00:20:43.517 "enable_quickack": false, 00:20:43.517 "enable_placement_id": 0, 00:20:43.517 "enable_zerocopy_send_server": true, 00:20:43.517 "enable_zerocopy_send_client": false, 00:20:43.517 "zerocopy_threshold": 0, 00:20:43.517 "tls_version": 0, 00:20:43.517 "enable_ktls": false 00:20:43.517 } 00:20:43.517 } 00:20:43.517 ] 00:20:43.517 }, 00:20:43.517 { 00:20:43.517 "subsystem": "vmd", 00:20:43.517 "config": [] 00:20:43.517 }, 00:20:43.517 { 00:20:43.517 "subsystem": "accel", 00:20:43.517 "config": [ 00:20:43.517 { 00:20:43.517 "method": "accel_set_options", 00:20:43.517 "params": { 00:20:43.517 "small_cache_size": 128, 00:20:43.517 "large_cache_size": 16, 00:20:43.517 "task_count": 2048, 00:20:43.517 "sequence_count": 2048, 00:20:43.517 "buf_count": 2048 00:20:43.517 } 00:20:43.517 } 00:20:43.517 ] 00:20:43.517 }, 00:20:43.517 { 00:20:43.517 "subsystem": "bdev", 00:20:43.517 "config": [ 00:20:43.517 { 00:20:43.517 "method": "bdev_set_options", 00:20:43.517 "params": { 00:20:43.517 "bdev_io_pool_size": 65535, 00:20:43.517 "bdev_io_cache_size": 256, 00:20:43.518 "bdev_auto_examine": true, 00:20:43.518 "iobuf_small_cache_size": 128, 00:20:43.518 "iobuf_large_cache_size": 16 00:20:43.518 } 00:20:43.518 }, 00:20:43.518 { 00:20:43.518 "method": "bdev_raid_set_options", 00:20:43.518 "params": { 00:20:43.518 "process_window_size_kb": 1024, 00:20:43.518 "process_max_bandwidth_mb_sec": 0 00:20:43.518 } 00:20:43.518 }, 00:20:43.518 { 00:20:43.518 "method": "bdev_iscsi_set_options", 00:20:43.518 "params": { 00:20:43.518 "timeout_sec": 30 00:20:43.518 } 00:20:43.518 }, 00:20:43.518 { 00:20:43.518 "method": "bdev_nvme_set_options", 00:20:43.518 "params": { 00:20:43.518 "action_on_timeout": "none", 00:20:43.518 "timeout_us": 0, 00:20:43.518 "timeout_admin_us": 0, 00:20:43.518 "keep_alive_timeout_ms": 10000, 00:20:43.518 "arbitration_burst": 0, 00:20:43.518 "low_priority_weight": 0, 00:20:43.518 "medium_priority_weight": 0, 00:20:43.518 
"high_priority_weight": 0, 00:20:43.518 "nvme_adminq_poll_period_us": 10000, 00:20:43.518 "nvme_ioq_poll_period_us": 0, 00:20:43.518 "io_queue_requests": 512, 00:20:43.518 "delay_cmd_submit": true, 00:20:43.518 "transport_retry_count": 4, 00:20:43.518 "bdev_retry_count": 3, 00:20:43.518 "transport_ack_timeout": 0, 00:20:43.518 "ctrlr_loss_timeout_sec": 0, 00:20:43.518 "reconnect_delay_sec": 0, 00:20:43.518 "fast_io_fail_timeout_sec": 0, 00:20:43.518 "disable_auto_failback": false, 00:20:43.518 "generate_uuids": false, 00:20:43.518 "transport_tos": 0, 00:20:43.518 "nvme_error_stat": false, 00:20:43.518 "rdma_srq_size": 0, 00:20:43.518 "io_path_stat": false, 00:20:43.518 "allow_accel_sequence": false, 00:20:43.518 "rdma_max_cq_size": 0, 00:20:43.518 "rdma_cm_event_timeout_ms": 0, 00:20:43.518 "dhchap_digests": [ 00:20:43.518 "sha256", 00:20:43.518 "sha384", 00:20:43.518 "sha512" 00:20:43.518 ], 00:20:43.518 "dhchap_dhgroups": [ 00:20:43.518 "null", 00:20:43.518 "ffdhe2048", 00:20:43.518 "ffdhe3072", 00:20:43.518 "ffdhe4096", 00:20:43.518 "ffdhe6144", 00:20:43.518 "ffdhe8192" 00:20:43.518 ] 00:20:43.518 } 00:20:43.518 }, 00:20:43.518 { 00:20:43.518 "method": "bdev_nvme_attach_controller", 00:20:43.518 "params": { 00:20:43.518 "name": "nvme0", 00:20:43.518 "trtype": "TCP", 00:20:43.518 "adrfam": "IPv4", 00:20:43.518 "traddr": "10.0.0.2", 00:20:43.518 "trsvcid": "4420", 00:20:43.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.518 "prchk_reftag": false, 00:20:43.518 "prchk_guard": false, 00:20:43.518 "ctrlr_loss_timeout_sec": 0, 00:20:43.518 "reconnect_delay_sec": 0, 00:20:43.518 "fast_io_fail_timeout_sec": 0, 00:20:43.518 "psk": "key0", 00:20:43.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:43.518 "hdgst": false, 00:20:43.518 "ddgst": false 00:20:43.518 } 00:20:43.518 }, 00:20:43.518 { 00:20:43.518 "method": "bdev_nvme_set_hotplug", 00:20:43.518 "params": { 00:20:43.518 "period_us": 100000, 00:20:43.518 "enable": false 00:20:43.518 } 00:20:43.518 }, 
00:20:43.518 { 00:20:43.518 "method": "bdev_enable_histogram", 00:20:43.518 "params": { 00:20:43.518 "name": "nvme0n1", 00:20:43.518 "enable": true 00:20:43.518 } 00:20:43.518 }, 00:20:43.518 { 00:20:43.518 "method": "bdev_wait_for_examine" 00:20:43.518 } 00:20:43.518 ] 00:20:43.518 }, 00:20:43.518 { 00:20:43.518 "subsystem": "nbd", 00:20:43.518 "config": [] 00:20:43.518 } 00:20:43.518 ] 00:20:43.518 }' 00:20:43.518 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 931359 00:20:43.518 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 931359 ']' 00:20:43.518 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 931359 00:20:43.518 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:43.518 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:43.518 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 931359 00:20:43.518 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:43.518 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:43.518 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 931359' 00:20:43.518 killing process with pid 931359 00:20:43.518 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 931359 00:20:43.518 Received shutdown signal, test time was about 1.000000 seconds 00:20:43.518 00:20:43.518 Latency(us) 00:20:43.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.518 =================================================================================================================== 00:20:43.518 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:20:43.518 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 931359 00:20:43.776 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 931196 00:20:43.776 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 931196 ']' 00:20:43.776 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 931196 00:20:43.776 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:43.776 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:43.776 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 931196 00:20:43.776 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:43.776 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:43.776 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 931196' 00:20:43.776 killing process with pid 931196 00:20:43.776 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 931196 00:20:43.776 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 931196 00:20:44.035 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:20:44.035 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:44.035 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:20:44.035 "subsystems": [ 00:20:44.035 { 00:20:44.035 "subsystem": "keyring", 00:20:44.035 "config": [ 00:20:44.035 { 00:20:44.035 "method": "keyring_file_add_key", 00:20:44.035 "params": { 00:20:44.035 "name": "key0", 00:20:44.035 "path": 
"/tmp/tmp.iquDBUIap0" 00:20:44.035 } 00:20:44.035 } 00:20:44.035 ] 00:20:44.035 }, 00:20:44.035 { 00:20:44.035 "subsystem": "iobuf", 00:20:44.035 "config": [ 00:20:44.035 { 00:20:44.035 "method": "iobuf_set_options", 00:20:44.035 "params": { 00:20:44.035 "small_pool_count": 8192, 00:20:44.035 "large_pool_count": 1024, 00:20:44.035 "small_bufsize": 8192, 00:20:44.035 "large_bufsize": 135168 00:20:44.035 } 00:20:44.035 } 00:20:44.035 ] 00:20:44.035 }, 00:20:44.035 { 00:20:44.035 "subsystem": "sock", 00:20:44.035 "config": [ 00:20:44.035 { 00:20:44.035 "method": "sock_set_default_impl", 00:20:44.035 "params": { 00:20:44.035 "impl_name": "posix" 00:20:44.035 } 00:20:44.035 }, 00:20:44.035 { 00:20:44.035 "method": "sock_impl_set_options", 00:20:44.035 "params": { 00:20:44.035 "impl_name": "ssl", 00:20:44.035 "recv_buf_size": 4096, 00:20:44.035 "send_buf_size": 4096, 00:20:44.035 "enable_recv_pipe": true, 00:20:44.035 "enable_quickack": false, 00:20:44.035 "enable_placement_id": 0, 00:20:44.035 "enable_zerocopy_send_server": true, 00:20:44.035 "enable_zerocopy_send_client": false, 00:20:44.035 "zerocopy_threshold": 0, 00:20:44.035 "tls_version": 0, 00:20:44.035 "enable_ktls": false 00:20:44.035 } 00:20:44.035 }, 00:20:44.035 { 00:20:44.035 "method": "sock_impl_set_options", 00:20:44.035 "params": { 00:20:44.035 "impl_name": "posix", 00:20:44.035 "recv_buf_size": 2097152, 00:20:44.035 "send_buf_size": 2097152, 00:20:44.035 "enable_recv_pipe": true, 00:20:44.035 "enable_quickack": false, 00:20:44.035 "enable_placement_id": 0, 00:20:44.035 "enable_zerocopy_send_server": true, 00:20:44.035 "enable_zerocopy_send_client": false, 00:20:44.035 "zerocopy_threshold": 0, 00:20:44.035 "tls_version": 0, 00:20:44.035 "enable_ktls": false 00:20:44.035 } 00:20:44.035 } 00:20:44.035 ] 00:20:44.035 }, 00:20:44.035 { 00:20:44.035 "subsystem": "vmd", 00:20:44.035 "config": [] 00:20:44.035 }, 00:20:44.035 { 00:20:44.035 "subsystem": "accel", 00:20:44.035 "config": [ 00:20:44.035 { 
00:20:44.035 "method": "accel_set_options", 00:20:44.035 "params": { 00:20:44.035 "small_cache_size": 128, 00:20:44.035 "large_cache_size": 16, 00:20:44.035 "task_count": 2048, 00:20:44.035 "sequence_count": 2048, 00:20:44.035 "buf_count": 2048 00:20:44.035 } 00:20:44.035 } 00:20:44.035 ] 00:20:44.035 }, 00:20:44.035 { 00:20:44.035 "subsystem": "bdev", 00:20:44.035 "config": [ 00:20:44.035 { 00:20:44.035 "method": "bdev_set_options", 00:20:44.035 "params": { 00:20:44.035 "bdev_io_pool_size": 65535, 00:20:44.035 "bdev_io_cache_size": 256, 00:20:44.035 "bdev_auto_examine": true, 00:20:44.035 "iobuf_small_cache_size": 128, 00:20:44.035 "iobuf_large_cache_size": 16 00:20:44.035 } 00:20:44.035 }, 00:20:44.035 { 00:20:44.035 "method": "bdev_raid_set_options", 00:20:44.035 "params": { 00:20:44.035 "process_window_size_kb": 1024, 00:20:44.035 "process_max_bandwidth_mb_sec": 0 00:20:44.035 } 00:20:44.035 }, 00:20:44.035 { 00:20:44.035 "method": "bdev_iscsi_set_options", 00:20:44.035 "params": { 00:20:44.035 "timeout_sec": 30 00:20:44.035 } 00:20:44.035 }, 00:20:44.035 { 00:20:44.035 "method": "bdev_nvme_set_options", 00:20:44.035 "params": { 00:20:44.035 "action_on_timeout": "none", 00:20:44.035 "timeout_us": 0, 00:20:44.035 "timeout_admin_us": 0, 00:20:44.035 "keep_alive_timeout_ms": 10000, 00:20:44.035 "arbitration_burst": 0, 00:20:44.035 "low_priority_weight": 0, 00:20:44.035 "medium_priority_weight": 0, 00:20:44.035 "high_priority_weight": 0, 00:20:44.035 "nvme_adminq_poll_period_us": 10000, 00:20:44.035 "nvme_ioq_poll_period_us": 0, 00:20:44.035 "io_queue_requests": 0, 00:20:44.035 "delay_cmd_submit": true, 00:20:44.035 "transport_retry_count": 4, 00:20:44.035 "bdev_retry_count": 3, 00:20:44.035 "transport_ack_timeout": 0, 00:20:44.035 "ctrlr_loss_timeout_sec": 0, 00:20:44.035 "reconnect_delay_sec": 0, 00:20:44.035 "fast_io_fail_timeout_sec": 0, 00:20:44.035 "disable_auto_failback": false, 00:20:44.035 "generate_uuids": false, 00:20:44.035 "transport_tos": 0, 
00:20:44.035 "nvme_error_stat": false, 00:20:44.035 "rdma_srq_size": 0, 00:20:44.035 "io_path_stat": false, 00:20:44.035 "allow_accel_sequence": false, 00:20:44.035 "rdma_max_cq_size": 0, 00:20:44.035 "rdma_cm_event_timeout_ms": 0, 00:20:44.035 "dhchap_digests": [ 00:20:44.035 "sha256", 00:20:44.035 "sha384", 00:20:44.035 "sha512" 00:20:44.035 ], 00:20:44.035 "dhchap_dhgroups": [ 00:20:44.035 "null", 00:20:44.035 "ffdhe2048", 00:20:44.035 "ffdhe3072", 00:20:44.035 "ffdhe4096", 00:20:44.035 "ffdhe6144", 00:20:44.035 "ffdhe8192" 00:20:44.035 ] 00:20:44.035 } 00:20:44.035 }, 00:20:44.035 { 00:20:44.035 "method": "bdev_nvme_set_hotplug", 00:20:44.035 "params": { 00:20:44.035 "period_us": 100000, 00:20:44.035 "enable": false 00:20:44.035 } 00:20:44.035 }, 00:20:44.035 { 00:20:44.035 "method": "bdev_malloc_create", 00:20:44.035 "params": { 00:20:44.035 "name": "malloc0", 00:20:44.035 "num_blocks": 8192, 00:20:44.035 "block_size": 4096, 00:20:44.035 "physical_block_size": 4096, 00:20:44.035 "uuid": "1a5a30d6-2c2d-41b6-a91a-3219eabbd341", 00:20:44.035 "optimal_io_boundary": 0, 00:20:44.035 "md_size": 0, 00:20:44.035 "dif_type": 0, 00:20:44.035 "dif_is_head_of_md": false, 00:20:44.035 "dif_pi_format": 0 00:20:44.035 } 00:20:44.035 }, 00:20:44.036 { 00:20:44.036 "method": "bdev_wait_for_examine" 00:20:44.036 } 00:20:44.036 ] 00:20:44.036 }, 00:20:44.036 { 00:20:44.036 "subsystem": "nbd", 00:20:44.036 "config": [] 00:20:44.036 }, 00:20:44.036 { 00:20:44.036 "subsystem": "scheduler", 00:20:44.036 "config": [ 00:20:44.036 { 00:20:44.036 "method": "framework_set_scheduler", 00:20:44.036 "params": { 00:20:44.036 "name": "static" 00:20:44.036 } 00:20:44.036 } 00:20:44.036 ] 00:20:44.036 }, 00:20:44.036 { 00:20:44.036 "subsystem": "nvmf", 00:20:44.036 "config": [ 00:20:44.036 { 00:20:44.036 "method": "nvmf_set_config", 00:20:44.036 "params": { 00:20:44.036 "discovery_filter": "match_any", 00:20:44.036 "admin_cmd_passthru": { 00:20:44.036 "identify_ctrlr": false 00:20:44.036 } 
00:20:44.036 } 00:20:44.036 }, 00:20:44.036 { 00:20:44.036 "method": "nvmf_set_max_subsystems", 00:20:44.036 "params": { 00:20:44.036 "max_subsystems": 1024 00:20:44.036 } 00:20:44.036 }, 00:20:44.036 { 00:20:44.036 "method": "nvmf_set_crdt", 00:20:44.036 "params": { 00:20:44.036 "crdt1": 0, 00:20:44.036 "crdt2": 0, 00:20:44.036 "crdt3": 0 00:20:44.036 } 00:20:44.036 }, 00:20:44.036 { 00:20:44.036 "method": "nvmf_create_transport", 00:20:44.036 "params": { 00:20:44.036 "trtype": "TCP", 00:20:44.036 "max_queue_depth": 128, 00:20:44.036 "max_io_qpairs_per_ctrlr": 127, 00:20:44.036 "in_capsule_data_size": 4096, 00:20:44.036 "max_io_size": 131072, 00:20:44.036 "io_unit_size": 131072, 00:20:44.036 "max_aq_depth": 128, 00:20:44.036 "num_shared_buffers": 511, 00:20:44.036 "buf_cache_size": 4294967295, 00:20:44.036 "dif_insert_or_strip": false, 00:20:44.036 "zcopy": false, 00:20:44.036 "c2h_success": false, 00:20:44.036 "sock_priority": 0, 00:20:44.036 "abort_timeout_sec": 1, 00:20:44.036 "ack_timeout": 0, 00:20:44.036 "data_wr_pool_size": 0 00:20:44.036 } 00:20:44.036 }, 00:20:44.036 { 00:20:44.036 "method": "nvmf_create_subsystem", 00:20:44.036 "params": { 00:20:44.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.036 "allow_any_host": false, 00:20:44.036 "serial_number": "00000000000000000000", 00:20:44.036 "model_number": "SPDK bdev Controller", 00:20:44.036 "max_namespaces": 32, 00:20:44.036 "min_cntlid": 1, 00:20:44.036 "max_cntlid": 65519, 00:20:44.036 "ana_reporting": false 00:20:44.036 } 00:20:44.036 }, 00:20:44.036 { 00:20:44.036 "method": "nvmf_subsystem_add_host", 00:20:44.036 "params": { 00:20:44.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.036 "host": "nqn.2016-06.io.spdk:host1", 00:20:44.036 "psk": "key0" 00:20:44.036 } 00:20:44.036 }, 00:20:44.036 { 00:20:44.036 "method": "nvmf_subsystem_add_ns", 00:20:44.036 "params": { 00:20:44.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.036 "namespace": { 00:20:44.036 "nsid": 1, 00:20:44.036 "bdev_name": 
"malloc0", 00:20:44.036 "nguid": "1A5A30D62C2D41B6A91A3219EABBD341", 00:20:44.036 "uuid": "1a5a30d6-2c2d-41b6-a91a-3219eabbd341", 00:20:44.036 "no_auto_visible": false 00:20:44.036 } 00:20:44.036 } 00:20:44.036 }, 00:20:44.036 { 00:20:44.036 "method": "nvmf_subsystem_add_listener", 00:20:44.036 "params": { 00:20:44.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.036 "listen_address": { 00:20:44.036 "trtype": "TCP", 00:20:44.036 "adrfam": "IPv4", 00:20:44.036 "traddr": "10.0.0.2", 00:20:44.036 "trsvcid": "4420" 00:20:44.036 }, 00:20:44.036 "secure_channel": false, 00:20:44.036 "sock_impl": "ssl" 00:20:44.036 } 00:20:44.036 } 00:20:44.036 ] 00:20:44.036 } 00:20:44.036 ] 00:20:44.036 }' 00:20:44.036 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:44.036 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.036 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=931734 00:20:44.036 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:44.036 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 931734 00:20:44.036 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 931734 ']' 00:20:44.036 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.036 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:44.036 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:44.036 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:44.036 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.036 [2024-07-25 19:11:36.468926] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:44.036 [2024-07-25 19:11:36.469016] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.294 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.294 [2024-07-25 19:11:36.552492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.294 [2024-07-25 19:11:36.663284] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.294 [2024-07-25 19:11:36.663340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.294 [2024-07-25 19:11:36.663362] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.294 [2024-07-25 19:11:36.663372] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.294 [2024-07-25 19:11:36.663382] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:44.294 [2024-07-25 19:11:36.663465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.553 [2024-07-25 19:11:36.916175] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.553 [2024-07-25 19:11:36.959899] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:44.553 [2024-07-25 19:11:36.960153] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.119 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:45.119 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:45.119 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:45.119 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:45.119 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.119 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.119 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=931886 00:20:45.119 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 931886 /var/tmp/bdevperf.sock 00:20:45.119 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 931886 ']' 00:20:45.119 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.119 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:45.119 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:20:45.119 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:20:45.119 "subsystems": [ 00:20:45.119 { 00:20:45.119 "subsystem": "keyring", 00:20:45.119 "config": [ 00:20:45.119 { 00:20:45.119 "method": "keyring_file_add_key", 00:20:45.119 "params": { 00:20:45.119 "name": "key0", 00:20:45.119 "path": "/tmp/tmp.iquDBUIap0" 00:20:45.119 } 00:20:45.119 } 00:20:45.119 ] 00:20:45.119 }, 00:20:45.119 { 00:20:45.119 "subsystem": "iobuf", 00:20:45.119 "config": [ 00:20:45.119 { 00:20:45.119 "method": "iobuf_set_options", 00:20:45.119 "params": { 00:20:45.119 "small_pool_count": 8192, 00:20:45.119 "large_pool_count": 1024, 00:20:45.119 "small_bufsize": 8192, 00:20:45.119 "large_bufsize": 135168 00:20:45.119 } 00:20:45.119 } 00:20:45.119 ] 00:20:45.119 }, 00:20:45.119 { 00:20:45.119 "subsystem": "sock", 00:20:45.119 "config": [ 00:20:45.119 { 00:20:45.119 "method": "sock_set_default_impl", 00:20:45.119 "params": { 00:20:45.119 "impl_name": "posix" 00:20:45.119 } 00:20:45.119 }, 00:20:45.119 { 00:20:45.119 "method": "sock_impl_set_options", 00:20:45.119 "params": { 00:20:45.119 "impl_name": "ssl", 00:20:45.119 "recv_buf_size": 4096, 00:20:45.119 "send_buf_size": 4096, 00:20:45.119 "enable_recv_pipe": true, 00:20:45.119 "enable_quickack": false, 00:20:45.119 "enable_placement_id": 0, 00:20:45.119 "enable_zerocopy_send_server": true, 00:20:45.119 "enable_zerocopy_send_client": false, 00:20:45.119 "zerocopy_threshold": 0, 00:20:45.119 "tls_version": 0, 00:20:45.119 "enable_ktls": false 00:20:45.119 } 00:20:45.119 }, 00:20:45.119 { 00:20:45.119 "method": "sock_impl_set_options", 00:20:45.119 "params": { 00:20:45.119 "impl_name": "posix", 00:20:45.119 "recv_buf_size": 2097152, 00:20:45.119 "send_buf_size": 2097152, 00:20:45.119 "enable_recv_pipe": true, 00:20:45.119 "enable_quickack": false, 00:20:45.119 "enable_placement_id": 0, 00:20:45.119 "enable_zerocopy_send_server": true, 00:20:45.119 "enable_zerocopy_send_client": false, 
00:20:45.119 "zerocopy_threshold": 0, 00:20:45.119 "tls_version": 0, 00:20:45.119 "enable_ktls": false 00:20:45.119 } 00:20:45.119 } 00:20:45.120 ] 00:20:45.120 }, 00:20:45.120 { 00:20:45.120 "subsystem": "vmd", 00:20:45.120 "config": [] 00:20:45.120 }, 00:20:45.120 { 00:20:45.120 "subsystem": "accel", 00:20:45.120 "config": [ 00:20:45.120 { 00:20:45.120 "method": "accel_set_options", 00:20:45.120 "params": { 00:20:45.120 "small_cache_size": 128, 00:20:45.120 "large_cache_size": 16, 00:20:45.120 "task_count": 2048, 00:20:45.120 "sequence_count": 2048, 00:20:45.120 "buf_count": 2048 00:20:45.120 } 00:20:45.120 } 00:20:45.120 ] 00:20:45.120 }, 00:20:45.120 { 00:20:45.120 "subsystem": "bdev", 00:20:45.120 "config": [ 00:20:45.120 { 00:20:45.120 "method": "bdev_set_options", 00:20:45.120 "params": { 00:20:45.120 "bdev_io_pool_size": 65535, 00:20:45.120 "bdev_io_cache_size": 256, 00:20:45.120 "bdev_auto_examine": true, 00:20:45.120 "iobuf_small_cache_size": 128, 00:20:45.120 "iobuf_large_cache_size": 16 00:20:45.120 } 00:20:45.120 }, 00:20:45.120 { 00:20:45.120 "method": "bdev_raid_set_options", 00:20:45.120 "params": { 00:20:45.120 "process_window_size_kb": 1024, 00:20:45.120 "process_max_bandwidth_mb_sec": 0 00:20:45.120 } 00:20:45.120 }, 00:20:45.120 { 00:20:45.120 "method": "bdev_iscsi_set_options", 00:20:45.120 "params": { 00:20:45.120 "timeout_sec": 30 00:20:45.120 } 00:20:45.120 }, 00:20:45.120 { 00:20:45.120 "method": "bdev_nvme_set_options", 00:20:45.120 "params": { 00:20:45.120 "action_on_timeout": "none", 00:20:45.120 "timeout_us": 0, 00:20:45.120 "timeout_admin_us": 0, 00:20:45.120 "keep_alive_timeout_ms": 10000, 00:20:45.120 "arbitration_burst": 0, 00:20:45.120 "low_priority_weight": 0, 00:20:45.120 "medium_priority_weight": 0, 00:20:45.120 "high_priority_weight": 0, 00:20:45.120 "nvme_adminq_poll_period_us": 10000, 00:20:45.120 "nvme_ioq_poll_period_us": 0, 00:20:45.120 "io_queue_requests": 512, 00:20:45.120 "delay_cmd_submit": true, 00:20:45.120 
"transport_retry_count": 4, 00:20:45.120 "bdev_retry_count": 3, 00:20:45.120 "transport_ack_timeout": 0, 00:20:45.120 "ctrlr_loss_timeout_sec": 0, 00:20:45.120 "reconnect_delay_sec": 0, 00:20:45.120 "fast_io_fail_timeout_sec": 0, 00:20:45.120 "disable_auto_failback": false, 00:20:45.120 "generate_uuids": false, 00:20:45.120 "transport_tos": 0, 00:20:45.120 "nvme_error_stat": false, 00:20:45.120 "rdma_srq_size": 0, 00:20:45.120 "io_path_stat": false, 00:20:45.120 "allow_accel_sequence": false, 00:20:45.120 "rdma_max_cq_size": 0, 00:20:45.120 "rdma_cm_event_timeout_ms": 0, 00:20:45.120 "dhchap_digests": [ 00:20:45.120 "sha256", 00:20:45.120 "sha384", 00:20:45.120 "sha512" 00:20:45.120 ], 00:20:45.120 "dhchap_dhgroups": [ 00:20:45.120 "null", 00:20:45.120 "ffdhe2048", 00:20:45.120 "ffdhe3072", 00:20:45.120 "ffdhe4096", 00:20:45.120 "ffdhe6144", 00:20:45.120 "ffdhe8192" 00:20:45.120 ] 00:20:45.120 } 00:20:45.120 }, 00:20:45.120 { 00:20:45.120 "method": "bdev_nvme_attach_controller", 00:20:45.120 "params": { 00:20:45.120 "name": "nvme0", 00:20:45.120 "trtype": "TCP", 00:20:45.120 "adrfam": "IPv4", 00:20:45.120 "traddr": "10.0.0.2", 00:20:45.120 "trsvcid": "4420", 00:20:45.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.120 "prchk_reftag": false, 00:20:45.120 "prchk_guard": false, 00:20:45.120 "ctrlr_loss_timeout_sec": 0, 00:20:45.120 "reconnect_delay_sec": 0, 00:20:45.120 "fast_io_fail_timeout_sec": 0, 00:20:45.120 "psk": "key0", 00:20:45.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.120 "hdgst": false, 00:20:45.120 "ddgst": false 00:20:45.120 } 00:20:45.120 }, 00:20:45.120 { 00:20:45.120 "method": "bdev_nvme_set_hotplug", 00:20:45.120 "params": { 00:20:45.120 "period_us": 100000, 00:20:45.120 "enable": false 00:20:45.120 } 00:20:45.120 }, 00:20:45.120 { 00:20:45.120 "method": "bdev_enable_histogram", 00:20:45.120 "params": { 00:20:45.120 "name": "nvme0n1", 00:20:45.120 "enable": true 00:20:45.120 } 00:20:45.120 }, 00:20:45.120 { 00:20:45.120 "method": 
"bdev_wait_for_examine" 00:20:45.120 } 00:20:45.120 ] 00:20:45.120 }, 00:20:45.120 { 00:20:45.120 "subsystem": "nbd", 00:20:45.120 "config": [] 00:20:45.120 } 00:20:45.120 ] 00:20:45.120 }' 00:20:45.120 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.120 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:45.120 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.120 [2024-07-25 19:11:37.489045] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:45.120 [2024-07-25 19:11:37.489145] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931886 ] 00:20:45.120 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.120 [2024-07-25 19:11:37.556057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.379 [2024-07-25 19:11:37.665995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.637 [2024-07-25 19:11:37.851113] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.202 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:46.202 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:46.202 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:46.202 19:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:20:46.460 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.460 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:46.460 Running I/O for 1 seconds... 00:20:47.833 00:20:47.833 Latency(us) 00:20:47.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.833 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:47.833 Verification LBA range: start 0x0 length 0x2000 00:20:47.833 nvme0n1 : 1.07 1647.65 6.44 0.00 0.00 75621.26 10874.12 100973.99 00:20:47.833 =================================================================================================================== 00:20:47.833 Total : 1647.65 6.44 0.00 0.00 75621.26 10874.12 100973.99 00:20:47.833 0 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z 
nvmf_trace.0 ]] 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:47.833 nvmf_trace.0 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 931886 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 931886 ']' 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 931886 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 931886 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 931886' 00:20:47.833 killing process with pid 931886 00:20:47.833 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 931886 00:20:47.833 Received shutdown signal, test time was about 1.000000 seconds 00:20:47.833 00:20:47.833 Latency(us) 00:20:47.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.834 
=================================================================================================================== 00:20:47.834 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:47.834 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 931886 00:20:47.834 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:47.834 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:47.834 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:47.834 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:47.834 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:47.834 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:47.834 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:47.834 rmmod nvme_tcp 00:20:47.834 rmmod nvme_fabrics 00:20:48.092 rmmod nvme_keyring 00:20:48.092 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:48.092 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:48.092 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:48.092 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 931734 ']' 00:20:48.092 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 931734 00:20:48.092 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 931734 ']' 00:20:48.092 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 931734 00:20:48.092 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:48.092 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:20:48.092 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 931734 00:20:48.092 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:48.092 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:48.092 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 931734' 00:20:48.092 killing process with pid 931734 00:20:48.092 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 931734 00:20:48.092 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 931734 00:20:48.352 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:48.352 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:48.352 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:48.352 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:48.352 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:48.352 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.352 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.352 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.256 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:50.256 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.NSXkD1BzuU /tmp/tmp.dwGgM6HznL /tmp/tmp.iquDBUIap0 00:20:50.256 00:20:50.256 real 1m25.377s 
00:20:50.256 user 2m6.453s 00:20:50.256 sys 0m30.336s 00:20:50.256 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:50.256 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.256 ************************************ 00:20:50.256 END TEST nvmf_tls 00:20:50.256 ************************************ 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:50.516 ************************************ 00:20:50.516 START TEST nvmf_fips 00:20:50.516 ************************************ 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:50.516 * Looking for test storage... 
00:20:50.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # 
openssl version 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:50.516 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:50.517 Error setting digest 00:20:50.517 0042299CB67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:50.517 0042299CB67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:50.517 19:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:50.517 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:53.063 19:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:53.063 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:53.063 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.063 19:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.063 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:53.064 Found net devices under 0000:09:00.0: cvl_0_0 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.064 
19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:53.064 Found net devices under 0000:09:00.1: cvl_0_1 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:53.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:53.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:20:53.064 00:20:53.064 --- 10.0.0.2 ping statistics --- 00:20:53.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.064 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:53.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:20:53.064 00:20:53.064 --- 10.0.0.1 ping statistics --- 00:20:53.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.064 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=934538 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 934538 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 934538 ']' 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:53.064 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.322 [2024-07-25 19:11:45.543901] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:53.322 [2024-07-25 19:11:45.543980] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.322 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.322 [2024-07-25 19:11:45.620178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.322 [2024-07-25 19:11:45.734550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.322 [2024-07-25 19:11:45.734599] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.322 [2024-07-25 19:11:45.734614] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.322 [2024-07-25 19:11:45.734640] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.322 [2024-07-25 19:11:45.734649] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:53.322 [2024-07-25 19:11:45.734689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.254 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:54.254 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:54.254 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.254 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:54.254 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:54.254 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.254 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:54.254 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:54.254 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:54.254 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:54.254 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:54.254 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:54.254 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:54.254 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:54.512 [2024-07-25 19:11:46.748803] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.512 [2024-07-25 19:11:46.764805] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:54.512 [2024-07-25 19:11:46.765026] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.512 [2024-07-25 19:11:46.797774] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:54.512 malloc0 00:20:54.512 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:54.512 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=934694 00:20:54.512 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:54.512 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 934694 /var/tmp/bdevperf.sock 00:20:54.512 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 934694 ']' 00:20:54.512 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.512 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:54.512 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:54.512 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:54.512 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:54.512 [2024-07-25 19:11:46.891593] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:54.512 [2024-07-25 19:11:46.891689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid934694 ] 00:20:54.512 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.512 [2024-07-25 19:11:46.959813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.770 [2024-07-25 19:11:47.070018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.703 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:55.703 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:55.703 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:55.703 [2024-07-25 19:11:48.150436] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:55.703 [2024-07-25 19:11:48.150575] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:55.960 TLSTESTn1 00:20:55.960 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:55.960 Running I/O for 10 seconds... 00:21:08.150 00:21:08.150 Latency(us) 00:21:08.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.150 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:08.150 Verification LBA range: start 0x0 length 0x2000 00:21:08.150 TLSTESTn1 : 10.06 1805.70 7.05 0.00 0.00 70694.50 7524.50 104080.88 00:21:08.150 =================================================================================================================== 00:21:08.150 Total : 1805.70 7.05 0.00 0.00 70694.50 7524.50 104080.88 00:21:08.150 0 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:08.150 nvmf_trace.0 
00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 934694 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 934694 ']' 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 934694 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 934694 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 934694' 00:21:08.150 killing process with pid 934694 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 934694 00:21:08.150 Received shutdown signal, test time was about 10.000000 seconds 00:21:08.150 00:21:08.150 Latency(us) 00:21:08.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.150 =================================================================================================================== 00:21:08.150 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:08.150 [2024-07-25 19:11:58.555231] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 934694 
00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:08.150 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:08.150 rmmod nvme_tcp 00:21:08.150 rmmod nvme_fabrics 00:21:08.150 rmmod nvme_keyring 00:21:08.151 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:08.151 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:08.151 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:08.151 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 934538 ']' 00:21:08.151 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 934538 00:21:08.151 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 934538 ']' 00:21:08.151 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 934538 00:21:08.151 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:21:08.151 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:08.151 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 934538 00:21:08.151 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 
00:21:08.151 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:08.151 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 934538' 00:21:08.151 killing process with pid 934538 00:21:08.151 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 934538 00:21:08.151 [2024-07-25 19:11:58.924792] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:08.151 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 934538 00:21:08.151 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:08.151 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:08.151 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:08.151 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.151 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:08.151 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.151 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.151 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.132 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:09.132 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:09.132 00:21:09.132 real 0m18.527s 00:21:09.132 user 0m23.334s 00:21:09.132 sys 0m6.917s 00:21:09.132 19:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:09.132 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:09.132 ************************************ 00:21:09.132 END TEST nvmf_fips 00:21:09.132 ************************************ 00:21:09.132 19:12:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:21:09.132 19:12:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:21:09.132 19:12:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:21:09.132 19:12:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:21:09.132 19:12:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:21:09.132 19:12:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:21:11.658 19:12:03 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:11.658 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:11.658 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:11.658 19:12:03 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:11.658 Found net devices under 0000:09:00.0: cvl_0_0 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:11.658 Found net devices under 
0000:09:00.1: cvl_0_1 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:11.658 ************************************ 00:21:11.658 START TEST nvmf_perf_adq 00:21:11.658 ************************************ 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:11.658 * Looking for test storage... 
00:21:11.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.658 19:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:11.658 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:11.659 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:14.187 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:14.187 19:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:14.187 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:14.187 Found net devices under 0000:09:00.0: cvl_0_0 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:14.187 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.188 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:14.188 Found net devices under 0000:09:00.1: cvl_0_1 00:21:14.188 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.188 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:14.188 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.188 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:14.188 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:14.188 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:14.188 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:14.754 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:16.654 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:21.926 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:21.926 
19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:21.927 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:21.927 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.927 19:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:21.927 Found net devices under 0000:09:00.0: cvl_0_0 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.927 19:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:21.927 Found net devices under 0000:09:00.1: cvl_0_1 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:21.927 
19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:21.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:21.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:21:21.927 00:21:21.927 --- 10.0.0.2 ping statistics --- 00:21:21.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.927 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:21.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:21:21.927 00:21:21.927 --- 10.0.0.1 ping statistics --- 00:21:21.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.927 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=941900 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 941900 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 941900 ']' 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.927 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:21.928 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.928 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:21.928 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:21.928 [2024-07-25 19:12:14.265410] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:21.928 [2024-07-25 19:12:14.265516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.928 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.928 [2024-07-25 19:12:14.348650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:22.186 [2024-07-25 19:12:14.468557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.186 [2024-07-25 19:12:14.468620] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.186 [2024-07-25 19:12:14.468636] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.186 [2024-07-25 19:12:14.468649] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.186 [2024-07-25 19:12:14.468660] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:22.186 [2024-07-25 19:12:14.468746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.186 [2024-07-25 19:12:14.468811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.186 [2024-07-25 19:12:14.468904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:22.186 [2024-07-25 19:12:14.468907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:23.119 19:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.119 [2024-07-25 19:12:15.413030] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.119 Malloc1 00:21:23.119 19:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.119 [2024-07-25 19:12:15.465300] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=942172 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:23.119 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:21:23.119 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.017 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:25.017 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.017 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.275 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.275 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:25.275 "tick_rate": 2700000000, 00:21:25.275 "poll_groups": [ 00:21:25.275 { 00:21:25.275 "name": "nvmf_tgt_poll_group_000", 00:21:25.275 "admin_qpairs": 1, 00:21:25.275 "io_qpairs": 1, 00:21:25.275 "current_admin_qpairs": 1, 00:21:25.276 "current_io_qpairs": 1, 00:21:25.276 "pending_bdev_io": 0, 00:21:25.276 "completed_nvme_io": 19861, 00:21:25.276 "transports": [ 00:21:25.276 { 00:21:25.276 "trtype": "TCP" 00:21:25.276 } 00:21:25.276 ] 00:21:25.276 }, 00:21:25.276 { 00:21:25.276 "name": "nvmf_tgt_poll_group_001", 00:21:25.276 "admin_qpairs": 0, 00:21:25.276 "io_qpairs": 1, 00:21:25.276 "current_admin_qpairs": 0, 00:21:25.276 "current_io_qpairs": 1, 00:21:25.276 "pending_bdev_io": 0, 00:21:25.276 "completed_nvme_io": 20195, 00:21:25.276 "transports": [ 00:21:25.276 { 00:21:25.276 "trtype": "TCP" 00:21:25.276 } 00:21:25.276 ] 00:21:25.276 }, 00:21:25.276 { 00:21:25.276 "name": "nvmf_tgt_poll_group_002", 00:21:25.276 "admin_qpairs": 0, 00:21:25.276 "io_qpairs": 1, 00:21:25.276 "current_admin_qpairs": 0, 00:21:25.276 "current_io_qpairs": 1, 
00:21:25.276 "pending_bdev_io": 0, 00:21:25.276 "completed_nvme_io": 20382, 00:21:25.276 "transports": [ 00:21:25.276 { 00:21:25.276 "trtype": "TCP" 00:21:25.276 } 00:21:25.276 ] 00:21:25.276 }, 00:21:25.276 { 00:21:25.276 "name": "nvmf_tgt_poll_group_003", 00:21:25.276 "admin_qpairs": 0, 00:21:25.276 "io_qpairs": 1, 00:21:25.276 "current_admin_qpairs": 0, 00:21:25.276 "current_io_qpairs": 1, 00:21:25.276 "pending_bdev_io": 0, 00:21:25.276 "completed_nvme_io": 19090, 00:21:25.276 "transports": [ 00:21:25.276 { 00:21:25.276 "trtype": "TCP" 00:21:25.276 } 00:21:25.276 ] 00:21:25.276 } 00:21:25.276 ] 00:21:25.276 }' 00:21:25.276 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:25.276 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:25.276 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:25.276 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:25.276 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 942172 00:21:33.382 Initializing NVMe Controllers 00:21:33.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:33.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:33.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:33.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:33.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:33.382 Initialization complete. Launching workers. 
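The check above (`perf_adq.sh@78`-`@79`) counts how many poll groups report exactly one active I/O qpair and fails unless all 4 do, i.e. unless ADQ steered each of the perf tool's 4 connections onto its own poll group. A minimal standalone sketch of that check, using an illustrative stats JSON and plain `grep` in place of the script's `jq`/`wc -l` pipeline:

```shell
#!/usr/bin/env bash
# Sketch of the perf_adq.sh poll-group balance check: nvmf_get_stats is
# queried while spdk_nvme_perf runs, and the test asserts that every poll
# group owns exactly one active I/O qpair. The JSON below is a hand-made
# sample shaped like the real rpc output, not captured from a live target.
stats='{
  "poll_groups": [
    { "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1 }
  ]
}'

# The script proper uses jq to select groups with current_io_qpairs == 1
# and wc -l to count them; grep -c is enough for this flat sample shape.
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 1')

if [ "$count" -ne 4 ]; then
  echo "ADQ imbalance: only $count/4 poll groups carry an I/O qpair" >&2
  exit 1
fi
echo "all $count poll groups carry exactly one I/O qpair"
```

With ADQ misconfigured, connections pile onto fewer poll groups (as seen later in this log's second run, where one group carries all 4 qpairs), and a check like this is what surfaces it.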
00:21:33.382 ======================================================== 00:21:33.382 Latency(us) 00:21:33.382 Device Information : IOPS MiB/s Average min max 00:21:33.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9977.10 38.97 6415.93 3210.21 8871.78 00:21:33.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10560.40 41.25 6060.33 2277.14 8185.86 00:21:33.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10750.70 41.99 5954.03 2331.26 9607.18 00:21:33.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10350.00 40.43 6184.04 2164.21 10180.56 00:21:33.383 ======================================================== 00:21:33.383 Total : 41638.20 162.65 6148.84 2164.21 10180.56 00:21:33.383 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:33.383 rmmod nvme_tcp 00:21:33.383 rmmod nvme_fabrics 00:21:33.383 rmmod nvme_keyring 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:33.383 19:12:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 941900 ']' 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 941900 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 941900 ']' 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 941900 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 941900 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 941900' 00:21:33.383 killing process with pid 941900 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 941900 00:21:33.383 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 941900 00:21:33.641 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:33.641 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:33.641 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:33.642 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.642 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 
-- # remove_spdk_ns 00:21:33.642 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.642 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.642 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.577 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:35.577 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:35.577 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:36.512 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:38.469 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.741 
19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@298 -- # local -ga mlx 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.741 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:43.742 19:12:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:43.742 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:43.742 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:43.742 Found net devices under 0000:09:00.0: cvl_0_0 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:43.742 Found net devices under 0000:09:00.1: cvl_0_1 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:43.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:43.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:21:43.742 00:21:43.742 --- 10.0.0.2 ping statistics --- 00:21:43.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.742 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:43.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:21:43.742 00:21:43.742 --- 10.0.0.1 ping statistics --- 00:21:43.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.742 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:43.742 net.core.busy_poll = 1 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:43.742 net.core.busy_read = 1 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
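The `adq_configure_driver` steps traced above (hw-tc-offload, busy-poll sysctls, an mqprio root qdisc, and a flower filter steering NVMe/TCP traffic into hardware TC1) can be summarized as the following dry-run sketch. Interface, address, and port values are taken from this log; the `run` wrapper only echoes each command, since the real sequence needs root and an ice-driven E810 NIC inside the test namespace:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the ADQ NIC setup performed by perf_adq.sh
# adq_configure_driver. Echo-only: replace the run() body with "$@" to
# execute for real (as root, inside the cvl_0_0_ns_spdk namespace).
set -u
IFACE=cvl_0_0
IP=10.0.0.2
PORT=4420
run() { echo "+ $*"; }   # swap body for: "$@"

run ethtool --offload "$IFACE" hw-tc-offload on
run ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
run sysctl -w net.core.busy_poll=1
run sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 (default) on queues 0-1, TC1 on queues 2-3,
# offloaded to hardware in channel mode.
run tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 \
    queues 2@0 2@2 hw 1 mode channel
run tc qdisc add dev "$IFACE" ingress
# Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into hardware TC1,
# bypassing the software datapath (skip_sw).
run tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip "$IP/32" ip_proto tcp dst_port "$PORT" skip_sw hw_tc 1
```

The log then pins transmit/receive queues with SPDK's `set_xps_rxqs` helper and starts `nvmf_tgt` with `--sock-priority 1` so the target's sockets land in the ADQ traffic class.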
00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=944787 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 944787 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 944787 ']' 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.742 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.742 [2024-07-25 19:12:36.024788] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:43.742 [2024-07-25 19:12:36.024868] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.742 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.742 [2024-07-25 19:12:36.101342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:43.999 [2024-07-25 19:12:36.215291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.999 [2024-07-25 19:12:36.215337] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.999 [2024-07-25 19:12:36.215361] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.999 [2024-07-25 19:12:36.215373] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.999 [2024-07-25 19:12:36.215382] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:43.999 [2024-07-25 19:12:36.215439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.999 [2024-07-25 19:12:36.215572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.999 [2024-07-25 19:12:36.215628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:43.999 [2024-07-25 19:12:36.215631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.563 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.563 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:44.563 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:44.563 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:44.563 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.563 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.563 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:44.563 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:44.563 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:44.563 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.563 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.563 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.563 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:44.563 19:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:44.563 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.563 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.563 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.563 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:44.563 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.563 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.821 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.821 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:44.821 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.821 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.821 [2024-07-25 19:12:37.146411] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.821 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.821 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:44.821 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.822 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.822 Malloc1 00:21:44.822 19:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.822 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:44.822 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.822 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.822 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.822 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:44.822 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.822 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.822 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.822 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:44.822 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.822 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.822 [2024-07-25 19:12:37.200375] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.822 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.822 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=944945 00:21:44.822 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:44.822 19:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:44.822 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.749 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:46.749 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.749 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.006 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.006 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:47.006 "tick_rate": 2700000000, 00:21:47.006 "poll_groups": [ 00:21:47.006 { 00:21:47.006 "name": "nvmf_tgt_poll_group_000", 00:21:47.006 "admin_qpairs": 1, 00:21:47.006 "io_qpairs": 0, 00:21:47.006 "current_admin_qpairs": 1, 00:21:47.006 "current_io_qpairs": 0, 00:21:47.006 "pending_bdev_io": 0, 00:21:47.006 "completed_nvme_io": 0, 00:21:47.006 "transports": [ 00:21:47.006 { 00:21:47.006 "trtype": "TCP" 00:21:47.006 } 00:21:47.006 ] 00:21:47.006 }, 00:21:47.006 { 00:21:47.006 "name": "nvmf_tgt_poll_group_001", 00:21:47.006 "admin_qpairs": 0, 00:21:47.006 "io_qpairs": 4, 00:21:47.006 "current_admin_qpairs": 0, 00:21:47.006 "current_io_qpairs": 4, 00:21:47.006 "pending_bdev_io": 0, 00:21:47.006 "completed_nvme_io": 30183, 00:21:47.006 "transports": [ 00:21:47.006 { 00:21:47.006 "trtype": "TCP" 00:21:47.006 } 00:21:47.006 ] 00:21:47.006 }, 00:21:47.006 { 00:21:47.006 "name": "nvmf_tgt_poll_group_002", 00:21:47.006 "admin_qpairs": 0, 00:21:47.006 "io_qpairs": 0, 00:21:47.006 "current_admin_qpairs": 0, 00:21:47.006 "current_io_qpairs": 0, 00:21:47.006 "pending_bdev_io": 0, 
00:21:47.006 "completed_nvme_io": 0, 00:21:47.006 "transports": [ 00:21:47.006 { 00:21:47.006 "trtype": "TCP" 00:21:47.006 } 00:21:47.006 ] 00:21:47.006 }, 00:21:47.006 { 00:21:47.006 "name": "nvmf_tgt_poll_group_003", 00:21:47.006 "admin_qpairs": 0, 00:21:47.006 "io_qpairs": 0, 00:21:47.006 "current_admin_qpairs": 0, 00:21:47.006 "current_io_qpairs": 0, 00:21:47.006 "pending_bdev_io": 0, 00:21:47.006 "completed_nvme_io": 0, 00:21:47.006 "transports": [ 00:21:47.006 { 00:21:47.006 "trtype": "TCP" 00:21:47.006 } 00:21:47.006 ] 00:21:47.006 } 00:21:47.006 ] 00:21:47.006 }' 00:21:47.006 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:47.006 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:47.006 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:21:47.006 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:21:47.006 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 944945 00:21:55.110 Initializing NVMe Controllers 00:21:55.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:55.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:55.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:55.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:55.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:55.110 Initialization complete. Launching workers. 
00:21:55.110 ======================================================== 00:21:55.110 Latency(us) 00:21:55.110 Device Information : IOPS MiB/s Average min max 00:21:55.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3943.10 15.40 16236.15 2405.09 63288.26 00:21:55.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4094.40 15.99 15690.96 2685.77 62031.60 00:21:55.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4053.70 15.83 15802.26 2112.29 65004.61 00:21:55.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3765.90 14.71 17043.26 2731.75 67604.88 00:21:55.110 ======================================================== 00:21:55.110 Total : 15857.10 61.94 16176.14 2112.29 67604.88 00:21:55.110 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:55.110 rmmod nvme_tcp 00:21:55.110 rmmod nvme_fabrics 00:21:55.110 rmmod nvme_keyring 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:55.110 19:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 944787 ']' 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 944787 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 944787 ']' 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 944787 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 944787 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 944787' 00:21:55.110 killing process with pid 944787 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 944787 00:21:55.110 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 944787 00:21:55.368 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:55.369 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:55.369 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:55.369 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.369 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 
-- # remove_spdk_ns 00:21:55.369 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.369 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.369 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:57.900 00:21:57.900 real 0m46.055s 00:21:57.900 user 2m43.905s 00:21:57.900 sys 0m11.343s 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:57.900 ************************************ 00:21:57.900 END TEST nvmf_perf_adq 00:21:57.900 ************************************ 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:57.900 ************************************ 00:21:57.900 START TEST nvmf_shutdown 00:21:57.900 ************************************ 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:57.900 * Looking for test storage... 
00:21:57.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.900 19:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.900 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:57.901 19:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:57.901 ************************************ 00:21:57.901 START TEST nvmf_shutdown_tc1 00:21:57.901 ************************************ 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.901 19:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.901 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.457 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 
00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:00.458 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.458 19:12:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:00.458 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:00.458 Found net devices under 0000:09:00.0: cvl_0_0 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:00.458 Found net devices under 0000:09:00.1: cvl_0_1 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:00.458 19:12:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.458 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:00.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:22:00.458 00:22:00.458 --- 10.0.0.2 ping statistics --- 00:22:00.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.459 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:00.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:22:00.459 00:22:00.459 --- 10.0.0.1 ping statistics --- 00:22:00.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.459 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.459 
19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=948401 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 948401 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 948401 ']' 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.459 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.459 [2024-07-25 19:12:52.604187] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:00.459 [2024-07-25 19:12:52.604267] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.459 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.459 [2024-07-25 19:12:52.682003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:00.459 [2024-07-25 19:12:52.794332] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.459 [2024-07-25 19:12:52.794387] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.459 [2024-07-25 19:12:52.794401] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.459 [2024-07-25 19:12:52.794427] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.459 [2024-07-25 19:12:52.794437] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
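The namespaced launch of nvmf_tgt traced above comes from a small bash-array trick visible at nvmf/common.sh@270: the target's argv is kept as an array, and running it inside the test namespace is just splicing the `ip netns exec` words in front of that array. A minimal self-contained sketch of the pattern (the trailing echo is only for illustration; the binary path is shortened):

```shell
# Sketch of the NVMF_APP prefixing pattern from nvmf/common.sh@270.
# The app's command line lives in a bash array; running it inside the
# network namespace means prepending the "ip netns exec <ns>" words.
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF -m 0x1E)

# Splice the namespace wrapper in front of the existing argv.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

# "${NVMF_APP[@]}" now expands word-by-word to the namespaced command,
# matching the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0
# -e 0xFFFF -m 0x1E" invocation recorded at nvmf/common.sh@480.
echo "${NVMF_APP[*]}"
```

Keeping argv as an array (rather than a flat string) is what lets the wrapper be prepended without re-quoting any of the existing arguments.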
00:22:00.459 [2024-07-25 19:12:52.794522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.459 [2024-07-25 19:12:52.794584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.459 [2024-07-25 19:12:52.794650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:00.459 [2024-07-25 19:12:52.794653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.393 [2024-07-25 19:12:53.596705] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.393 19:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.393 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.393 Malloc1 00:22:01.393 [2024-07-25 19:12:53.672213] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.393 Malloc2 00:22:01.393 Malloc3 00:22:01.393 Malloc4 00:22:01.393 Malloc5 00:22:01.651 Malloc6 00:22:01.651 Malloc7 00:22:01.651 Malloc8 00:22:01.651 Malloc9 
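Both app launches in this log print "Waiting for process to start up and listen on UNIX domain socket ..."; that message comes from the waitforlisten helper in common/autotest_common.sh. A hedged sketch of its polling shape follows — the retry interval is an assumption, and a plain path-existence check stands in for the real listening-socket test:

```shell
# Rough sketch of waitforlisten from common/autotest_common.sh: poll
# until the process with $pid is alive AND its RPC UNIX socket path
# appears, giving up after max_retries. The real helper verifies an
# actual listening socket; the -e existence check is a simplification.
waitforlisten() {
  local pid=$1
  local rpc_addr=${2:-/var/tmp/spdk.sock}
  local max_retries=100
  local i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # process died: give up
    [[ -e $rpc_addr ]] && return 0           # socket path showed up
    sleep 0.1                                # interval is an assumption
  done
  return 1
}
```

In the log, nvmfappstart waits on /var/tmp/spdk.sock for the target (pid 948401), and the bdev_svc wrapper later waits on /var/tmp/bdevperf.sock the same way.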
00:22:01.651 Malloc10 00:22:01.651 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.651 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:01.651 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.651 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=948654 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 948654 /var/tmp/bdevperf.sock 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 948654 ']' 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:01.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:01.910 { 00:22:01.910 "params": { 00:22:01.910 "name": "Nvme$subsystem", 00:22:01.910 "trtype": "$TEST_TRANSPORT", 00:22:01.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.910 "adrfam": "ipv4", 00:22:01.910 "trsvcid": "$NVMF_PORT", 00:22:01.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.910 "hdgst": ${hdgst:-false}, 00:22:01.910 "ddgst": ${ddgst:-false} 00:22:01.910 }, 00:22:01.910 "method": "bdev_nvme_attach_controller" 00:22:01.910 } 00:22:01.910 EOF 00:22:01.910 )") 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:01.910 { 00:22:01.910 "params": { 00:22:01.910 "name": "Nvme$subsystem", 00:22:01.910 "trtype": "$TEST_TRANSPORT", 00:22:01.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.910 
"adrfam": "ipv4", 00:22:01.910 "trsvcid": "$NVMF_PORT", 00:22:01.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.910 "hdgst": ${hdgst:-false}, 00:22:01.910 "ddgst": ${ddgst:-false} 00:22:01.910 }, 00:22:01.910 "method": "bdev_nvme_attach_controller" 00:22:01.910 } 00:22:01.910 EOF 00:22:01.910 )") 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:01.910 { 00:22:01.910 "params": { 00:22:01.910 "name": "Nvme$subsystem", 00:22:01.910 "trtype": "$TEST_TRANSPORT", 00:22:01.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.910 "adrfam": "ipv4", 00:22:01.910 "trsvcid": "$NVMF_PORT", 00:22:01.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.910 "hdgst": ${hdgst:-false}, 00:22:01.910 "ddgst": ${ddgst:-false} 00:22:01.910 }, 00:22:01.910 "method": "bdev_nvme_attach_controller" 00:22:01.910 } 00:22:01.910 EOF 00:22:01.910 )") 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:01.910 { 00:22:01.910 "params": { 00:22:01.910 "name": "Nvme$subsystem", 00:22:01.910 "trtype": "$TEST_TRANSPORT", 00:22:01.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.910 "adrfam": "ipv4", 00:22:01.910 "trsvcid": "$NVMF_PORT", 00:22:01.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:01.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.910 "hdgst": ${hdgst:-false}, 00:22:01.910 "ddgst": ${ddgst:-false} 00:22:01.910 }, 00:22:01.910 "method": "bdev_nvme_attach_controller" 00:22:01.910 } 00:22:01.910 EOF 00:22:01.910 )") 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:01.910 { 00:22:01.910 "params": { 00:22:01.910 "name": "Nvme$subsystem", 00:22:01.910 "trtype": "$TEST_TRANSPORT", 00:22:01.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.910 "adrfam": "ipv4", 00:22:01.910 "trsvcid": "$NVMF_PORT", 00:22:01.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.910 "hdgst": ${hdgst:-false}, 00:22:01.910 "ddgst": ${ddgst:-false} 00:22:01.910 }, 00:22:01.910 "method": "bdev_nvme_attach_controller" 00:22:01.910 } 00:22:01.910 EOF 00:22:01.910 )") 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:01.910 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:01.910 { 00:22:01.910 "params": { 00:22:01.910 "name": "Nvme$subsystem", 00:22:01.910 "trtype": "$TEST_TRANSPORT", 00:22:01.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.910 "adrfam": "ipv4", 00:22:01.910 "trsvcid": "$NVMF_PORT", 00:22:01.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.910 "hdgst": ${hdgst:-false}, 00:22:01.910 "ddgst": 
${ddgst:-false} 00:22:01.910 }, 00:22:01.910 "method": "bdev_nvme_attach_controller" 00:22:01.910 } 00:22:01.910 EOF 00:22:01.910 )") 00:22:01.911 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:01.911 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:01.911 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:01.911 { 00:22:01.911 "params": { 00:22:01.911 "name": "Nvme$subsystem", 00:22:01.911 "trtype": "$TEST_TRANSPORT", 00:22:01.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.911 "adrfam": "ipv4", 00:22:01.911 "trsvcid": "$NVMF_PORT", 00:22:01.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.911 "hdgst": ${hdgst:-false}, 00:22:01.911 "ddgst": ${ddgst:-false} 00:22:01.911 }, 00:22:01.911 "method": "bdev_nvme_attach_controller" 00:22:01.911 } 00:22:01.911 EOF 00:22:01.911 )") 00:22:01.911 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:01.911 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:01.911 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:01.911 { 00:22:01.911 "params": { 00:22:01.911 "name": "Nvme$subsystem", 00:22:01.911 "trtype": "$TEST_TRANSPORT", 00:22:01.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.911 "adrfam": "ipv4", 00:22:01.911 "trsvcid": "$NVMF_PORT", 00:22:01.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.911 "hdgst": ${hdgst:-false}, 00:22:01.911 "ddgst": ${ddgst:-false} 00:22:01.911 }, 00:22:01.911 "method": "bdev_nvme_attach_controller" 00:22:01.911 } 00:22:01.911 EOF 00:22:01.911 
)") 00:22:01.911 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:01.911 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:01.911 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:01.911 { 00:22:01.911 "params": { 00:22:01.911 "name": "Nvme$subsystem", 00:22:01.911 "trtype": "$TEST_TRANSPORT", 00:22:01.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.911 "adrfam": "ipv4", 00:22:01.911 "trsvcid": "$NVMF_PORT", 00:22:01.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.911 "hdgst": ${hdgst:-false}, 00:22:01.911 "ddgst": ${ddgst:-false} 00:22:01.911 }, 00:22:01.911 "method": "bdev_nvme_attach_controller" 00:22:01.911 } 00:22:01.911 EOF 00:22:01.911 )") 00:22:01.911 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:01.911 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:01.911 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:01.911 { 00:22:01.911 "params": { 00:22:01.911 "name": "Nvme$subsystem", 00:22:01.911 "trtype": "$TEST_TRANSPORT", 00:22:01.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.911 "adrfam": "ipv4", 00:22:01.911 "trsvcid": "$NVMF_PORT", 00:22:01.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.911 "hdgst": ${hdgst:-false}, 00:22:01.911 "ddgst": ${ddgst:-false} 00:22:01.911 }, 00:22:01.911 "method": "bdev_nvme_attach_controller" 00:22:01.911 } 00:22:01.911 EOF 00:22:01.911 )") 00:22:01.911 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:01.911 
19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:01.911 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:01.911 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:01.911 "params": { 00:22:01.911 "name": "Nvme1", 00:22:01.911 "trtype": "tcp", 00:22:01.911 "traddr": "10.0.0.2", 00:22:01.911 "adrfam": "ipv4", 00:22:01.911 "trsvcid": "4420", 00:22:01.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.911 "hdgst": false, 00:22:01.911 "ddgst": false 00:22:01.911 }, 00:22:01.911 "method": "bdev_nvme_attach_controller" 00:22:01.911 },{ 00:22:01.911 "params": { 00:22:01.911 "name": "Nvme2", 00:22:01.911 "trtype": "tcp", 00:22:01.911 "traddr": "10.0.0.2", 00:22:01.911 "adrfam": "ipv4", 00:22:01.911 "trsvcid": "4420", 00:22:01.911 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:01.911 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:01.911 "hdgst": false, 00:22:01.911 "ddgst": false 00:22:01.911 }, 00:22:01.911 "method": "bdev_nvme_attach_controller" 00:22:01.911 },{ 00:22:01.911 "params": { 00:22:01.911 "name": "Nvme3", 00:22:01.911 "trtype": "tcp", 00:22:01.911 "traddr": "10.0.0.2", 00:22:01.911 "adrfam": "ipv4", 00:22:01.911 "trsvcid": "4420", 00:22:01.911 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:01.911 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:01.911 "hdgst": false, 00:22:01.911 "ddgst": false 00:22:01.911 }, 00:22:01.911 "method": "bdev_nvme_attach_controller" 00:22:01.911 },{ 00:22:01.911 "params": { 00:22:01.911 "name": "Nvme4", 00:22:01.911 "trtype": "tcp", 00:22:01.911 "traddr": "10.0.0.2", 00:22:01.911 "adrfam": "ipv4", 00:22:01.911 "trsvcid": "4420", 00:22:01.911 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:01.911 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:01.911 "hdgst": false, 00:22:01.911 "ddgst": false 00:22:01.911 }, 
00:22:01.911 "method": "bdev_nvme_attach_controller" 00:22:01.911 },{ 00:22:01.911 "params": { 00:22:01.911 "name": "Nvme5", 00:22:01.911 "trtype": "tcp", 00:22:01.911 "traddr": "10.0.0.2", 00:22:01.911 "adrfam": "ipv4", 00:22:01.911 "trsvcid": "4420", 00:22:01.911 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:01.911 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:01.911 "hdgst": false, 00:22:01.911 "ddgst": false 00:22:01.911 }, 00:22:01.911 "method": "bdev_nvme_attach_controller" 00:22:01.911 },{ 00:22:01.911 "params": { 00:22:01.911 "name": "Nvme6", 00:22:01.911 "trtype": "tcp", 00:22:01.911 "traddr": "10.0.0.2", 00:22:01.911 "adrfam": "ipv4", 00:22:01.911 "trsvcid": "4420", 00:22:01.911 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:01.911 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:01.911 "hdgst": false, 00:22:01.911 "ddgst": false 00:22:01.911 }, 00:22:01.911 "method": "bdev_nvme_attach_controller" 00:22:01.911 },{ 00:22:01.911 "params": { 00:22:01.911 "name": "Nvme7", 00:22:01.911 "trtype": "tcp", 00:22:01.911 "traddr": "10.0.0.2", 00:22:01.911 "adrfam": "ipv4", 00:22:01.911 "trsvcid": "4420", 00:22:01.911 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:01.911 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:01.911 "hdgst": false, 00:22:01.911 "ddgst": false 00:22:01.911 }, 00:22:01.911 "method": "bdev_nvme_attach_controller" 00:22:01.911 },{ 00:22:01.911 "params": { 00:22:01.911 "name": "Nvme8", 00:22:01.911 "trtype": "tcp", 00:22:01.911 "traddr": "10.0.0.2", 00:22:01.911 "adrfam": "ipv4", 00:22:01.911 "trsvcid": "4420", 00:22:01.911 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:01.911 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:01.911 "hdgst": false, 00:22:01.911 "ddgst": false 00:22:01.911 }, 00:22:01.911 "method": "bdev_nvme_attach_controller" 00:22:01.911 },{ 00:22:01.911 "params": { 00:22:01.911 "name": "Nvme9", 00:22:01.911 "trtype": "tcp", 00:22:01.911 "traddr": "10.0.0.2", 00:22:01.911 "adrfam": "ipv4", 00:22:01.911 "trsvcid": "4420", 00:22:01.911 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:01.911 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:01.911 "hdgst": false, 00:22:01.911 "ddgst": false 00:22:01.911 }, 00:22:01.911 "method": "bdev_nvme_attach_controller" 00:22:01.911 },{ 00:22:01.911 "params": { 00:22:01.911 "name": "Nvme10", 00:22:01.911 "trtype": "tcp", 00:22:01.911 "traddr": "10.0.0.2", 00:22:01.911 "adrfam": "ipv4", 00:22:01.911 "trsvcid": "4420", 00:22:01.911 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:01.911 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:01.911 "hdgst": false, 00:22:01.911 "ddgst": false 00:22:01.911 }, 00:22:01.911 "method": "bdev_nvme_attach_controller" 00:22:01.911 }' 00:22:01.911 [2024-07-25 19:12:54.178546] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:01.911 [2024-07-25 19:12:54.178655] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:01.911 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.911 [2024-07-25 19:12:54.253454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.911 [2024-07-25 19:12:54.365004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.811 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.811 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:03.811 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:03.811 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.811 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.811 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.811 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 948654 00:22:03.811 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:03.811 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:04.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 948654 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:04.744 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 948401 00:22:04.744 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:04.744 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:04.744 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:04.744 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:04.744 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.744 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.744 { 00:22:04.744 "params": { 00:22:04.744 "name": "Nvme$subsystem", 00:22:04.744 "trtype": "$TEST_TRANSPORT", 00:22:04.744 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:22:04.744 "adrfam": "ipv4", 00:22:04.744 "trsvcid": "$NVMF_PORT", 00:22:04.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.744 "hdgst": ${hdgst:-false}, 00:22:04.744 "ddgst": ${ddgst:-false} 00:22:04.744 }, 00:22:04.744 "method": "bdev_nvme_attach_controller" 00:22:04.744 } 00:22:04.744 EOF 00:22:04.744 )") 00:22:04.744 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:04.744 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.744 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.744 { 00:22:04.744 "params": { 00:22:04.744 "name": "Nvme$subsystem", 00:22:04.744 "trtype": "$TEST_TRANSPORT", 00:22:04.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.744 "adrfam": "ipv4", 00:22:04.744 "trsvcid": "$NVMF_PORT", 00:22:04.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.744 "hdgst": ${hdgst:-false}, 00:22:04.744 "ddgst": ${ddgst:-false} 00:22:04.744 }, 00:22:04.744 "method": "bdev_nvme_attach_controller" 00:22:04.744 } 00:22:04.744 EOF 00:22:04.744 )") 00:22:04.744 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:04.744 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.744 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.744 { 00:22:04.744 "params": { 00:22:04.744 "name": "Nvme$subsystem", 00:22:04.744 "trtype": "$TEST_TRANSPORT", 00:22:04.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.744 "adrfam": "ipv4", 00:22:04.744 "trsvcid": "$NVMF_PORT", 00:22:04.744 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.744 "hdgst": ${hdgst:-false}, 00:22:04.744 "ddgst": ${ddgst:-false} 00:22:04.744 }, 00:22:04.744 "method": "bdev_nvme_attach_controller" 00:22:04.744 } 00:22:04.744 EOF 00:22:04.744 )") 00:22:04.744 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:04.744 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.745 { 00:22:04.745 "params": { 00:22:04.745 "name": "Nvme$subsystem", 00:22:04.745 "trtype": "$TEST_TRANSPORT", 00:22:04.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.745 "adrfam": "ipv4", 00:22:04.745 "trsvcid": "$NVMF_PORT", 00:22:04.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.745 "hdgst": ${hdgst:-false}, 00:22:04.745 "ddgst": ${ddgst:-false} 00:22:04.745 }, 00:22:04.745 "method": "bdev_nvme_attach_controller" 00:22:04.745 } 00:22:04.745 EOF 00:22:04.745 )") 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.745 { 00:22:04.745 "params": { 00:22:04.745 "name": "Nvme$subsystem", 00:22:04.745 "trtype": "$TEST_TRANSPORT", 00:22:04.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.745 "adrfam": "ipv4", 00:22:04.745 "trsvcid": "$NVMF_PORT", 00:22:04.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.745 "hdgst": 
${hdgst:-false}, 00:22:04.745 "ddgst": ${ddgst:-false} 00:22:04.745 }, 00:22:04.745 "method": "bdev_nvme_attach_controller" 00:22:04.745 } 00:22:04.745 EOF 00:22:04.745 )") 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.745 { 00:22:04.745 "params": { 00:22:04.745 "name": "Nvme$subsystem", 00:22:04.745 "trtype": "$TEST_TRANSPORT", 00:22:04.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.745 "adrfam": "ipv4", 00:22:04.745 "trsvcid": "$NVMF_PORT", 00:22:04.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.745 "hdgst": ${hdgst:-false}, 00:22:04.745 "ddgst": ${ddgst:-false} 00:22:04.745 }, 00:22:04.745 "method": "bdev_nvme_attach_controller" 00:22:04.745 } 00:22:04.745 EOF 00:22:04.745 )") 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.745 { 00:22:04.745 "params": { 00:22:04.745 "name": "Nvme$subsystem", 00:22:04.745 "trtype": "$TEST_TRANSPORT", 00:22:04.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.745 "adrfam": "ipv4", 00:22:04.745 "trsvcid": "$NVMF_PORT", 00:22:04.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.745 "hdgst": ${hdgst:-false}, 00:22:04.745 "ddgst": ${ddgst:-false} 00:22:04.745 }, 00:22:04.745 "method": "bdev_nvme_attach_controller" 
00:22:04.745 } 00:22:04.745 EOF 00:22:04.745 )") 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.745 { 00:22:04.745 "params": { 00:22:04.745 "name": "Nvme$subsystem", 00:22:04.745 "trtype": "$TEST_TRANSPORT", 00:22:04.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.745 "adrfam": "ipv4", 00:22:04.745 "trsvcid": "$NVMF_PORT", 00:22:04.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.745 "hdgst": ${hdgst:-false}, 00:22:04.745 "ddgst": ${ddgst:-false} 00:22:04.745 }, 00:22:04.745 "method": "bdev_nvme_attach_controller" 00:22:04.745 } 00:22:04.745 EOF 00:22:04.745 )") 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.745 { 00:22:04.745 "params": { 00:22:04.745 "name": "Nvme$subsystem", 00:22:04.745 "trtype": "$TEST_TRANSPORT", 00:22:04.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.745 "adrfam": "ipv4", 00:22:04.745 "trsvcid": "$NVMF_PORT", 00:22:04.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.745 "hdgst": ${hdgst:-false}, 00:22:04.745 "ddgst": ${ddgst:-false} 00:22:04.745 }, 00:22:04.745 "method": "bdev_nvme_attach_controller" 00:22:04.745 } 00:22:04.745 EOF 00:22:04.745 )") 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:04.745 { 00:22:04.745 "params": { 00:22:04.745 "name": "Nvme$subsystem", 00:22:04.745 "trtype": "$TEST_TRANSPORT", 00:22:04.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.745 "adrfam": "ipv4", 00:22:04.745 "trsvcid": "$NVMF_PORT", 00:22:04.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.745 "hdgst": ${hdgst:-false}, 00:22:04.745 "ddgst": ${ddgst:-false} 00:22:04.745 }, 00:22:04.745 "method": "bdev_nvme_attach_controller" 00:22:04.745 } 00:22:04.745 EOF 00:22:04.745 )") 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:04.745 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:04.745 "params": { 00:22:04.745 "name": "Nvme1", 00:22:04.745 "trtype": "tcp", 00:22:04.745 "traddr": "10.0.0.2", 00:22:04.745 "adrfam": "ipv4", 00:22:04.745 "trsvcid": "4420", 00:22:04.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.745 "hdgst": false, 00:22:04.745 "ddgst": false 00:22:04.745 }, 00:22:04.745 "method": "bdev_nvme_attach_controller" 00:22:04.745 },{ 00:22:04.745 "params": { 00:22:04.745 "name": "Nvme2", 00:22:04.745 "trtype": "tcp", 00:22:04.745 "traddr": "10.0.0.2", 00:22:04.745 "adrfam": "ipv4", 00:22:04.745 "trsvcid": "4420", 00:22:04.745 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:04.745 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:04.745 "hdgst": false, 00:22:04.745 "ddgst": false 00:22:04.745 }, 00:22:04.745 "method": "bdev_nvme_attach_controller" 00:22:04.745 },{ 00:22:04.745 "params": { 00:22:04.745 "name": "Nvme3", 00:22:04.745 "trtype": "tcp", 00:22:04.745 "traddr": "10.0.0.2", 00:22:04.745 "adrfam": "ipv4", 00:22:04.745 "trsvcid": "4420", 00:22:04.745 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:04.745 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:04.745 "hdgst": false, 00:22:04.745 "ddgst": false 00:22:04.745 }, 00:22:04.745 "method": "bdev_nvme_attach_controller" 00:22:04.745 },{ 00:22:04.745 "params": { 00:22:04.745 "name": "Nvme4", 00:22:04.745 "trtype": "tcp", 00:22:04.745 "traddr": "10.0.0.2", 00:22:04.745 "adrfam": "ipv4", 00:22:04.745 "trsvcid": "4420", 00:22:04.745 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:04.745 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:04.745 "hdgst": false, 00:22:04.745 "ddgst": false 00:22:04.745 }, 00:22:04.745 "method": "bdev_nvme_attach_controller" 00:22:04.745 },{ 00:22:04.745 "params": { 
00:22:04.745 "name": "Nvme5", 00:22:04.745 "trtype": "tcp", 00:22:04.745 "traddr": "10.0.0.2", 00:22:04.745 "adrfam": "ipv4", 00:22:04.745 "trsvcid": "4420", 00:22:04.745 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:04.745 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:04.745 "hdgst": false, 00:22:04.745 "ddgst": false 00:22:04.745 }, 00:22:04.745 "method": "bdev_nvme_attach_controller" 00:22:04.745 },{ 00:22:04.745 "params": { 00:22:04.745 "name": "Nvme6", 00:22:04.745 "trtype": "tcp", 00:22:04.745 "traddr": "10.0.0.2", 00:22:04.745 "adrfam": "ipv4", 00:22:04.745 "trsvcid": "4420", 00:22:04.745 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:04.745 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:04.745 "hdgst": false, 00:22:04.745 "ddgst": false 00:22:04.745 }, 00:22:04.745 "method": "bdev_nvme_attach_controller" 00:22:04.745 },{ 00:22:04.745 "params": { 00:22:04.745 "name": "Nvme7", 00:22:04.745 "trtype": "tcp", 00:22:04.745 "traddr": "10.0.0.2", 00:22:04.746 "adrfam": "ipv4", 00:22:04.746 "trsvcid": "4420", 00:22:04.746 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:04.746 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:04.746 "hdgst": false, 00:22:04.746 "ddgst": false 00:22:04.746 }, 00:22:04.746 "method": "bdev_nvme_attach_controller" 00:22:04.746 },{ 00:22:04.746 "params": { 00:22:04.746 "name": "Nvme8", 00:22:04.746 "trtype": "tcp", 00:22:04.746 "traddr": "10.0.0.2", 00:22:04.746 "adrfam": "ipv4", 00:22:04.746 "trsvcid": "4420", 00:22:04.746 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:04.746 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:04.746 "hdgst": false, 00:22:04.746 "ddgst": false 00:22:04.746 }, 00:22:04.746 "method": "bdev_nvme_attach_controller" 00:22:04.746 },{ 00:22:04.746 "params": { 00:22:04.746 "name": "Nvme9", 00:22:04.746 "trtype": "tcp", 00:22:04.746 "traddr": "10.0.0.2", 00:22:04.746 "adrfam": "ipv4", 00:22:04.746 "trsvcid": "4420", 00:22:04.746 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:04.746 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:04.746 "hdgst": false, 00:22:04.746 "ddgst": false 00:22:04.746 }, 00:22:04.746 "method": "bdev_nvme_attach_controller" 00:22:04.746 },{ 00:22:04.746 "params": { 00:22:04.746 "name": "Nvme10", 00:22:04.746 "trtype": "tcp", 00:22:04.746 "traddr": "10.0.0.2", 00:22:04.746 "adrfam": "ipv4", 00:22:04.746 "trsvcid": "4420", 00:22:04.746 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:04.746 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:04.746 "hdgst": false, 00:22:04.746 "ddgst": false 00:22:04.746 }, 00:22:04.746 "method": "bdev_nvme_attach_controller" 00:22:04.746 }' 00:22:04.746 [2024-07-25 19:12:57.077988] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:04.746 [2024-07-25 19:12:57.078092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949014 ] 00:22:04.746 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.746 [2024-07-25 19:12:57.151028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.004 [2024-07-25 19:12:57.262857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.409 Running I/O for 1 seconds... 
00:22:07.784 00:22:07.784 Latency(us) 00:22:07.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.784 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:07.784 Verification LBA range: start 0x0 length 0x400 00:22:07.784 Nvme1n1 : 1.18 217.04 13.56 0.00 0.00 290313.10 20486.07 281173.71 00:22:07.784 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:07.784 Verification LBA range: start 0x0 length 0x400 00:22:07.784 Nvme2n1 : 1.16 220.50 13.78 0.00 0.00 283177.72 20097.71 256318.58 00:22:07.784 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:07.784 Verification LBA range: start 0x0 length 0x400 00:22:07.784 Nvme3n1 : 1.18 217.78 13.61 0.00 0.00 281748.29 32816.55 287387.50 00:22:07.784 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:07.784 Verification LBA range: start 0x0 length 0x400 00:22:07.784 Nvme4n1 : 1.17 218.67 13.67 0.00 0.00 276308.39 23592.96 287387.50 00:22:07.784 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:07.784 Verification LBA range: start 0x0 length 0x400 00:22:07.784 Nvme5n1 : 1.08 177.24 11.08 0.00 0.00 333826.53 24855.13 295154.73 00:22:07.784 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:07.784 Verification LBA range: start 0x0 length 0x400 00:22:07.784 Nvme6n1 : 1.19 215.83 13.49 0.00 0.00 271480.60 18447.17 282727.16 00:22:07.784 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:07.784 Verification LBA range: start 0x0 length 0x400 00:22:07.784 Nvme7n1 : 1.20 213.87 13.37 0.00 0.00 269831.96 21165.70 293601.28 00:22:07.784 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:07.784 Verification LBA range: start 0x0 length 0x400 00:22:07.784 Nvme8n1 : 1.18 220.16 13.76 0.00 0.00 255949.82 9175.04 274959.93 00:22:07.784 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:22:07.784 Verification LBA range: start 0x0 length 0x400 00:22:07.784 Nvme9n1 : 1.20 213.07 13.32 0.00 0.00 262333.25 19903.53 321563.31 00:22:07.784 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:07.784 Verification LBA range: start 0x0 length 0x400 00:22:07.784 Nvme10n1 : 1.19 214.89 13.43 0.00 0.00 255448.94 21262.79 265639.25 00:22:07.784 =================================================================================================================== 00:22:07.784 Total : 2129.04 133.06 0.00 0.00 276578.43 9175.04 321563.31 00:22:08.042 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:08.042 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:08.042 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:08.042 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:08.042 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:08.042 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:08.042 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:08.042 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:08.042 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:08.042 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:08.042 
19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:08.043 rmmod nvme_tcp 00:22:08.043 rmmod nvme_fabrics 00:22:08.043 rmmod nvme_keyring 00:22:08.043 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:08.043 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:08.043 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:08.043 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 948401 ']' 00:22:08.043 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 948401 00:22:08.043 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 948401 ']' 00:22:08.043 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 948401 00:22:08.043 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:08.043 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:08.043 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 948401 00:22:08.043 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:08.043 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:08.043 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 948401' 00:22:08.043 killing process with 
pid 948401 00:22:08.043 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 948401 00:22:08.043 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 948401 00:22:08.611 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:08.611 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:08.611 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:08.611 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:08.611 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:08.611 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.611 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.611 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:10.516 00:22:10.516 real 0m12.934s 00:22:10.516 user 0m37.054s 00:22:10.516 sys 0m3.579s 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:10.516 ************************************ 00:22:10.516 END TEST nvmf_shutdown_tc1 00:22:10.516 ************************************ 
00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:10.516 ************************************ 00:22:10.516 START TEST nvmf_shutdown_tc2 00:22:10.516 ************************************ 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.516 19:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 
-- # local -ga e810 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:10.516 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:10.517 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:10.517 19:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:10.517 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:10.517 Found net devices under 0000:09:00.0: cvl_0_0 00:22:10.517 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:10.776 Found net devices under 0000:09:00.1: cvl_0_1 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.776 
19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.776 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.776 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.776 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.776 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:10.776 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.776 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.776 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.776 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:10.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:10.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:22:10.776 00:22:10.776 --- 10.0.0.2 ping statistics --- 00:22:10.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.776 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:22:10.776 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:10.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:22:10.776 00:22:10.776 --- 10.0.0.1 ping statistics --- 00:22:10.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.776 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:22:10.776 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.776 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:10.776 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:10.776 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:10.777 19:13:03 
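The nvmf_tcp_init sequence above (common.sh@244-267) isolates the first net device in its own network namespace, addresses both ends from 10.0.0.0/24, opens TCP port 4420, and ping-verifies both directions. A rough sketch of that command sequence, using the namespace and interface names from the log; the commands require root, so this helper only emits them:

```shell
#!/usr/bin/env bash
# Emit the commands that move the target NIC into its own namespace so
# target and initiator share one host but talk over real TCP.
# Names mirror the log (cvl_0_0 / cvl_0_1 / cvl_0_0_ns_spdk).
netns_setup_cmds() {
    local ns=$1 tgt=$2 ini=$3
    cat <<EOF
ip netns add $ns
ip link set $tgt netns $ns
ip addr add 10.0.0.1/24 dev $ini
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt
ip link set $ini up
ip netns exec $ns ip link set $tgt up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini -p tcp --dport 4420 -j ACCEPT
EOF
}

netns_setup_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

After this, the `ping -c 1` pair seen in the log confirms the 10.0.0.1 ↔ 10.0.0.2 path works in both directions before any NVMe traffic starts.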
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=949786 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 949786 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 949786 ']' 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.777 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.777 [2024-07-25 19:13:03.208528] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:10.777 [2024-07-25 19:13:03.208619] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.043 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.043 [2024-07-25 19:13:03.289480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.043 [2024-07-25 19:13:03.408551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.043 [2024-07-25 19:13:03.408616] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.043 [2024-07-25 19:13:03.408633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.043 [2024-07-25 19:13:03.408646] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.043 [2024-07-25 19:13:03.408657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:11.043 [2024-07-25 19:13:03.408754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.043 [2024-07-25 19:13:03.408851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.043 [2024-07-25 19:13:03.408918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:11.043 [2024-07-25 19:13:03.408920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:11.981 [2024-07-25 19:13:04.158662] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.981 19:13:04 
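The waitforlisten call traced above (common/autotest_common.sh@835-864) blocks until nvmf_tgt is accepting RPCs on /var/tmp/spdk.sock before the test proceeds. A simplified stand-in that only polls for the UNIX socket to appear; the real helper also verifies the target pid is still alive and returns 0 on success, as seen at @864:

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket exists, up to $2 retries at 0.1 s each.
# Simplified sketch: the real waitforlisten additionally checks the
# process is alive and that the RPC endpoint actually answers.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1
}
```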
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.981 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:11.981 Malloc1 00:22:11.981 [2024-07-25 19:13:04.233829] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.981 Malloc2 00:22:11.981 Malloc3 00:22:11.981 Malloc4 00:22:11.981 Malloc5 00:22:12.239 Malloc6 00:22:12.239 Malloc7 00:22:12.239 Malloc8 00:22:12.239 Malloc9 
00:22:12.239 Malloc10 00:22:12.239 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.239 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:12.239 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:12.239 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=950081 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 950081 /var/tmp/bdevperf.sock 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 950081 ']' 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:12.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.498 { 00:22:12.498 "params": { 00:22:12.498 "name": "Nvme$subsystem", 00:22:12.498 "trtype": "$TEST_TRANSPORT", 00:22:12.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.498 "adrfam": "ipv4", 00:22:12.498 "trsvcid": "$NVMF_PORT", 00:22:12.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.498 "hdgst": ${hdgst:-false}, 00:22:12.498 "ddgst": ${ddgst:-false} 00:22:12.498 }, 00:22:12.498 "method": "bdev_nvme_attach_controller" 00:22:12.498 } 00:22:12.498 EOF 00:22:12.498 )") 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.498 { 00:22:12.498 "params": { 00:22:12.498 "name": "Nvme$subsystem", 00:22:12.498 "trtype": "$TEST_TRANSPORT", 00:22:12.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.498 "adrfam": "ipv4", 00:22:12.498 "trsvcid": "$NVMF_PORT", 00:22:12.498 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.498 "hdgst": ${hdgst:-false}, 00:22:12.498 "ddgst": ${ddgst:-false} 00:22:12.498 }, 00:22:12.498 "method": "bdev_nvme_attach_controller" 00:22:12.498 } 00:22:12.498 EOF 00:22:12.498 )") 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.498 { 00:22:12.498 "params": { 00:22:12.498 "name": "Nvme$subsystem", 00:22:12.498 "trtype": "$TEST_TRANSPORT", 00:22:12.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.498 "adrfam": "ipv4", 00:22:12.498 "trsvcid": "$NVMF_PORT", 00:22:12.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.498 "hdgst": ${hdgst:-false}, 00:22:12.498 "ddgst": ${ddgst:-false} 00:22:12.498 }, 00:22:12.498 "method": "bdev_nvme_attach_controller" 00:22:12.498 } 00:22:12.498 EOF 00:22:12.498 )") 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.498 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.498 { 00:22:12.499 "params": { 00:22:12.499 "name": "Nvme$subsystem", 00:22:12.499 "trtype": "$TEST_TRANSPORT", 00:22:12.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.499 "adrfam": "ipv4", 00:22:12.499 "trsvcid": "$NVMF_PORT", 00:22:12.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.499 "hdgst": 
${hdgst:-false}, 00:22:12.499 "ddgst": ${ddgst:-false} 00:22:12.499 }, 00:22:12.499 "method": "bdev_nvme_attach_controller" 00:22:12.499 } 00:22:12.499 EOF 00:22:12.499 )") 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.499 { 00:22:12.499 "params": { 00:22:12.499 "name": "Nvme$subsystem", 00:22:12.499 "trtype": "$TEST_TRANSPORT", 00:22:12.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.499 "adrfam": "ipv4", 00:22:12.499 "trsvcid": "$NVMF_PORT", 00:22:12.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.499 "hdgst": ${hdgst:-false}, 00:22:12.499 "ddgst": ${ddgst:-false} 00:22:12.499 }, 00:22:12.499 "method": "bdev_nvme_attach_controller" 00:22:12.499 } 00:22:12.499 EOF 00:22:12.499 )") 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.499 { 00:22:12.499 "params": { 00:22:12.499 "name": "Nvme$subsystem", 00:22:12.499 "trtype": "$TEST_TRANSPORT", 00:22:12.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.499 "adrfam": "ipv4", 00:22:12.499 "trsvcid": "$NVMF_PORT", 00:22:12.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.499 "hdgst": ${hdgst:-false}, 00:22:12.499 "ddgst": ${ddgst:-false} 00:22:12.499 }, 00:22:12.499 "method": "bdev_nvme_attach_controller" 
00:22:12.499 } 00:22:12.499 EOF 00:22:12.499 )") 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.499 { 00:22:12.499 "params": { 00:22:12.499 "name": "Nvme$subsystem", 00:22:12.499 "trtype": "$TEST_TRANSPORT", 00:22:12.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.499 "adrfam": "ipv4", 00:22:12.499 "trsvcid": "$NVMF_PORT", 00:22:12.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.499 "hdgst": ${hdgst:-false}, 00:22:12.499 "ddgst": ${ddgst:-false} 00:22:12.499 }, 00:22:12.499 "method": "bdev_nvme_attach_controller" 00:22:12.499 } 00:22:12.499 EOF 00:22:12.499 )") 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.499 { 00:22:12.499 "params": { 00:22:12.499 "name": "Nvme$subsystem", 00:22:12.499 "trtype": "$TEST_TRANSPORT", 00:22:12.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.499 "adrfam": "ipv4", 00:22:12.499 "trsvcid": "$NVMF_PORT", 00:22:12.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.499 "hdgst": ${hdgst:-false}, 00:22:12.499 "ddgst": ${ddgst:-false} 00:22:12.499 }, 00:22:12.499 "method": "bdev_nvme_attach_controller" 00:22:12.499 } 00:22:12.499 EOF 00:22:12.499 )") 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@554 -- # cat 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.499 { 00:22:12.499 "params": { 00:22:12.499 "name": "Nvme$subsystem", 00:22:12.499 "trtype": "$TEST_TRANSPORT", 00:22:12.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.499 "adrfam": "ipv4", 00:22:12.499 "trsvcid": "$NVMF_PORT", 00:22:12.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.499 "hdgst": ${hdgst:-false}, 00:22:12.499 "ddgst": ${ddgst:-false} 00:22:12.499 }, 00:22:12.499 "method": "bdev_nvme_attach_controller" 00:22:12.499 } 00:22:12.499 EOF 00:22:12.499 )") 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.499 { 00:22:12.499 "params": { 00:22:12.499 "name": "Nvme$subsystem", 00:22:12.499 "trtype": "$TEST_TRANSPORT", 00:22:12.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.499 "adrfam": "ipv4", 00:22:12.499 "trsvcid": "$NVMF_PORT", 00:22:12.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.499 "hdgst": ${hdgst:-false}, 00:22:12.499 "ddgst": ${ddgst:-false} 00:22:12.499 }, 00:22:12.499 "method": "bdev_nvme_attach_controller" 00:22:12.499 } 00:22:12.499 EOF 00:22:12.499 )") 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@556 -- # jq . 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:12.499 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:12.499 "params": { 00:22:12.499 "name": "Nvme1", 00:22:12.499 "trtype": "tcp", 00:22:12.499 "traddr": "10.0.0.2", 00:22:12.499 "adrfam": "ipv4", 00:22:12.499 "trsvcid": "4420", 00:22:12.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.499 "hdgst": false, 00:22:12.499 "ddgst": false 00:22:12.499 }, 00:22:12.499 "method": "bdev_nvme_attach_controller" 00:22:12.499 },{ 00:22:12.499 "params": { 00:22:12.499 "name": "Nvme2", 00:22:12.499 "trtype": "tcp", 00:22:12.499 "traddr": "10.0.0.2", 00:22:12.499 "adrfam": "ipv4", 00:22:12.499 "trsvcid": "4420", 00:22:12.499 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:12.499 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:12.499 "hdgst": false, 00:22:12.499 "ddgst": false 00:22:12.499 }, 00:22:12.499 "method": "bdev_nvme_attach_controller" 00:22:12.499 },{ 00:22:12.499 "params": { 00:22:12.499 "name": "Nvme3", 00:22:12.499 "trtype": "tcp", 00:22:12.499 "traddr": "10.0.0.2", 00:22:12.499 "adrfam": "ipv4", 00:22:12.499 "trsvcid": "4420", 00:22:12.499 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:12.499 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:12.499 "hdgst": false, 00:22:12.499 "ddgst": false 00:22:12.499 }, 00:22:12.499 "method": "bdev_nvme_attach_controller" 00:22:12.499 },{ 00:22:12.499 "params": { 00:22:12.499 "name": "Nvme4", 00:22:12.499 "trtype": "tcp", 00:22:12.499 "traddr": "10.0.0.2", 00:22:12.499 "adrfam": "ipv4", 00:22:12.499 "trsvcid": "4420", 00:22:12.499 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:12.499 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:12.499 "hdgst": false, 00:22:12.499 "ddgst": false 00:22:12.499 }, 00:22:12.499 "method": "bdev_nvme_attach_controller" 00:22:12.499 },{ 
00:22:12.499 "params": { 00:22:12.499 "name": "Nvme5", 00:22:12.499 "trtype": "tcp", 00:22:12.499 "traddr": "10.0.0.2", 00:22:12.499 "adrfam": "ipv4", 00:22:12.499 "trsvcid": "4420", 00:22:12.499 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:12.499 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:12.499 "hdgst": false, 00:22:12.499 "ddgst": false 00:22:12.499 }, 00:22:12.499 "method": "bdev_nvme_attach_controller" 00:22:12.499 },{ 00:22:12.499 "params": { 00:22:12.499 "name": "Nvme6", 00:22:12.499 "trtype": "tcp", 00:22:12.499 "traddr": "10.0.0.2", 00:22:12.499 "adrfam": "ipv4", 00:22:12.499 "trsvcid": "4420", 00:22:12.499 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:12.499 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:12.499 "hdgst": false, 00:22:12.499 "ddgst": false 00:22:12.499 }, 00:22:12.499 "method": "bdev_nvme_attach_controller" 00:22:12.499 },{ 00:22:12.499 "params": { 00:22:12.499 "name": "Nvme7", 00:22:12.499 "trtype": "tcp", 00:22:12.500 "traddr": "10.0.0.2", 00:22:12.500 "adrfam": "ipv4", 00:22:12.500 "trsvcid": "4420", 00:22:12.500 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:12.500 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:12.500 "hdgst": false, 00:22:12.500 "ddgst": false 00:22:12.500 }, 00:22:12.500 "method": "bdev_nvme_attach_controller" 00:22:12.500 },{ 00:22:12.500 "params": { 00:22:12.500 "name": "Nvme8", 00:22:12.500 "trtype": "tcp", 00:22:12.500 "traddr": "10.0.0.2", 00:22:12.500 "adrfam": "ipv4", 00:22:12.500 "trsvcid": "4420", 00:22:12.500 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:12.500 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:12.500 "hdgst": false, 00:22:12.500 "ddgst": false 00:22:12.500 }, 00:22:12.500 "method": "bdev_nvme_attach_controller" 00:22:12.500 },{ 00:22:12.500 "params": { 00:22:12.500 "name": "Nvme9", 00:22:12.500 "trtype": "tcp", 00:22:12.500 "traddr": "10.0.0.2", 00:22:12.500 "adrfam": "ipv4", 00:22:12.500 "trsvcid": "4420", 00:22:12.500 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:12.500 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:22:12.500 "hdgst": false, 00:22:12.500 "ddgst": false 00:22:12.500 }, 00:22:12.500 "method": "bdev_nvme_attach_controller" 00:22:12.500 },{ 00:22:12.500 "params": { 00:22:12.500 "name": "Nvme10", 00:22:12.500 "trtype": "tcp", 00:22:12.500 "traddr": "10.0.0.2", 00:22:12.500 "adrfam": "ipv4", 00:22:12.500 "trsvcid": "4420", 00:22:12.500 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:12.500 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:12.500 "hdgst": false, 00:22:12.500 "ddgst": false 00:22:12.500 }, 00:22:12.500 "method": "bdev_nvme_attach_controller" 00:22:12.500 }' 00:22:12.500 [2024-07-25 19:13:04.765583] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:12.500 [2024-07-25 19:13:04.765666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950081 ] 00:22:12.500 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.500 [2024-07-25 19:13:04.838548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.500 [2024-07-25 19:13:04.950522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.398 Running I/O for 10 seconds... 
00:22:14.398 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:14.398 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:14.399 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:14.657 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:14.657 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:14.657 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:14.657 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:14.657 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.657 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.657 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.657 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:14.657 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:14.657 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 950081 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 950081 ']' 
00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 950081 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 950081 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 950081' 00:22:14.915 killing process with pid 950081 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 950081 00:22:14.915 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 950081 00:22:15.173 Received shutdown signal, test time was about 0.948062 seconds 00:22:15.173 00:22:15.173 Latency(us) 00:22:15.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.173 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.173 Verification LBA range: start 0x0 length 0x400 00:22:15.173 Nvme1n1 : 0.94 203.79 12.74 0.00 0.00 310403.48 25049.32 310689.19 00:22:15.173 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.173 Verification LBA range: start 0x0 length 0x400 00:22:15.173 Nvme2n1 : 0.88 150.90 9.43 0.00 0.00 402340.30 13010.11 316902.97 00:22:15.173 Job: Nvme3n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:22:15.173 Verification LBA range: start 0x0 length 0x400 00:22:15.173 Nvme3n1 : 0.94 204.91 12.81 0.00 0.00 296536.49 22233.69 315349.52 00:22:15.173 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.173 Verification LBA range: start 0x0 length 0x400 00:22:15.173 Nvme4n1 : 0.95 202.89 12.68 0.00 0.00 293278.15 42525.58 278066.82 00:22:15.173 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.173 Verification LBA range: start 0x0 length 0x400 00:22:15.173 Nvme5n1 : 0.92 208.12 13.01 0.00 0.00 279144.93 50098.63 279620.27 00:22:15.173 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.173 Verification LBA range: start 0x0 length 0x400 00:22:15.173 Nvme6n1 : 0.93 205.69 12.86 0.00 0.00 276826.39 21262.79 295154.73 00:22:15.173 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.173 Verification LBA range: start 0x0 length 0x400 00:22:15.173 Nvme7n1 : 0.95 202.69 12.67 0.00 0.00 274753.11 28156.21 316902.97 00:22:15.173 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.173 Verification LBA range: start 0x0 length 0x400 00:22:15.173 Nvme8n1 : 0.93 206.90 12.93 0.00 0.00 263423.37 33010.73 298261.62 00:22:15.173 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.173 Verification LBA range: start 0x0 length 0x400 00:22:15.173 Nvme9n1 : 0.89 147.54 9.22 0.00 0.00 347734.72 2597.17 320009.86 00:22:15.173 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.173 Verification LBA range: start 0x0 length 0x400 00:22:15.173 Nvme10n1 : 0.89 143.39 8.96 0.00 0.00 359696.69 23690.05 346418.44 00:22:15.173 =================================================================================================================== 00:22:15.173 Total : 1876.84 117.30 0.00 0.00 304186.34 2597.17 346418.44 00:22:15.431 19:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 949786 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:16.365 rmmod nvme_tcp 00:22:16.365 rmmod nvme_fabrics 00:22:16.365 rmmod nvme_keyring 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:16.365 19:13:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 949786 ']' 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 949786 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 949786 ']' 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 949786 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 949786 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 949786' 00:22:16.365 killing process with pid 949786 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 949786 00:22:16.365 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 949786 00:22:16.932 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:16.932 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:16.932 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:16.932 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:16.932 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:16.932 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.932 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.932 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:19.469 00:22:19.469 real 0m8.376s 00:22:19.469 user 0m25.688s 00:22:19.469 sys 0m1.583s 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:19.469 ************************************ 00:22:19.469 END TEST nvmf_shutdown_tc2 00:22:19.469 ************************************ 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:19.469 ************************************ 00:22:19.469 START TEST nvmf_shutdown_tc3 00:22:19.469 ************************************ 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 
00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 
00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.469 19:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:19.469 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:19.469 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:19.469 19:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.469 19:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:19.469 Found net devices under 0000:09:00.0: cvl_0_0 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.469 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:19.470 Found net devices under 0000:09:00.1: cvl_0_1 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- 
# [[ yes == yes ]] 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.470 19:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:22:19.470 00:22:19.470 --- 10.0.0.2 ping statistics --- 00:22:19.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.470 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:19.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:22:19.470 00:22:19.470 --- 10.0.0.1 ping statistics --- 00:22:19.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.470 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:19.470 
19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=950998 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 950998 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 950998 ']' 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.470 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:19.470 [2024-07-25 19:13:11.605312] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:19.470 [2024-07-25 19:13:11.605394] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.470 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.470 [2024-07-25 19:13:11.684785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.470 [2024-07-25 19:13:11.804302] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.470 [2024-07-25 19:13:11.804360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.470 [2024-07-25 19:13:11.804376] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.470 [2024-07-25 19:13:11.804389] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.470 [2024-07-25 19:13:11.804409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:19.470 [2024-07-25 19:13:11.804499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.470 [2024-07-25 19:13:11.804608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.470 [2024-07-25 19:13:11.804676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:19.470 [2024-07-25 19:13:11.804679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.402 [2024-07-25 19:13:12.552629] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.402 19:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.402 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.402 Malloc1 00:22:20.402 [2024-07-25 19:13:12.628005] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.402 Malloc2 00:22:20.402 Malloc3 00:22:20.402 Malloc4 00:22:20.402 Malloc5 00:22:20.402 Malloc6 00:22:20.660 Malloc7 00:22:20.660 Malloc8 00:22:20.660 Malloc9 
00:22:20.660 Malloc10 00:22:20.660 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.660 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:20.660 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=951193 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 951193 /var/tmp/bdevperf.sock 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 951193 ']' 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:20.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.661 { 00:22:20.661 "params": { 00:22:20.661 "name": "Nvme$subsystem", 00:22:20.661 "trtype": "$TEST_TRANSPORT", 00:22:20.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.661 "adrfam": "ipv4", 00:22:20.661 "trsvcid": "$NVMF_PORT", 00:22:20.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.661 "hdgst": ${hdgst:-false}, 00:22:20.661 "ddgst": ${ddgst:-false} 00:22:20.661 }, 00:22:20.661 "method": "bdev_nvme_attach_controller" 00:22:20.661 } 00:22:20.661 EOF 00:22:20.661 )") 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.661 { 00:22:20.661 "params": { 00:22:20.661 "name": "Nvme$subsystem", 00:22:20.661 "trtype": "$TEST_TRANSPORT", 00:22:20.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.661 
"adrfam": "ipv4", 00:22:20.661 "trsvcid": "$NVMF_PORT", 00:22:20.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.661 "hdgst": ${hdgst:-false}, 00:22:20.661 "ddgst": ${ddgst:-false} 00:22:20.661 }, 00:22:20.661 "method": "bdev_nvme_attach_controller" 00:22:20.661 } 00:22:20.661 EOF 00:22:20.661 )") 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.661 { 00:22:20.661 "params": { 00:22:20.661 "name": "Nvme$subsystem", 00:22:20.661 "trtype": "$TEST_TRANSPORT", 00:22:20.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.661 "adrfam": "ipv4", 00:22:20.661 "trsvcid": "$NVMF_PORT", 00:22:20.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.661 "hdgst": ${hdgst:-false}, 00:22:20.661 "ddgst": ${ddgst:-false} 00:22:20.661 }, 00:22:20.661 "method": "bdev_nvme_attach_controller" 00:22:20.661 } 00:22:20.661 EOF 00:22:20.661 )") 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.661 { 00:22:20.661 "params": { 00:22:20.661 "name": "Nvme$subsystem", 00:22:20.661 "trtype": "$TEST_TRANSPORT", 00:22:20.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.661 "adrfam": "ipv4", 00:22:20.661 "trsvcid": "$NVMF_PORT", 00:22:20.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:20.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.661 "hdgst": ${hdgst:-false}, 00:22:20.661 "ddgst": ${ddgst:-false} 00:22:20.661 }, 00:22:20.661 "method": "bdev_nvme_attach_controller" 00:22:20.661 } 00:22:20.661 EOF 00:22:20.661 )") 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.661 { 00:22:20.661 "params": { 00:22:20.661 "name": "Nvme$subsystem", 00:22:20.661 "trtype": "$TEST_TRANSPORT", 00:22:20.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.661 "adrfam": "ipv4", 00:22:20.661 "trsvcid": "$NVMF_PORT", 00:22:20.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.661 "hdgst": ${hdgst:-false}, 00:22:20.661 "ddgst": ${ddgst:-false} 00:22:20.661 }, 00:22:20.661 "method": "bdev_nvme_attach_controller" 00:22:20.661 } 00:22:20.661 EOF 00:22:20.661 )") 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.661 { 00:22:20.661 "params": { 00:22:20.661 "name": "Nvme$subsystem", 00:22:20.661 "trtype": "$TEST_TRANSPORT", 00:22:20.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.661 "adrfam": "ipv4", 00:22:20.661 "trsvcid": "$NVMF_PORT", 00:22:20.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.661 "hdgst": ${hdgst:-false}, 00:22:20.661 "ddgst": 
${ddgst:-false} 00:22:20.661 }, 00:22:20.661 "method": "bdev_nvme_attach_controller" 00:22:20.661 } 00:22:20.661 EOF 00:22:20.661 )") 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.661 { 00:22:20.661 "params": { 00:22:20.661 "name": "Nvme$subsystem", 00:22:20.661 "trtype": "$TEST_TRANSPORT", 00:22:20.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.661 "adrfam": "ipv4", 00:22:20.661 "trsvcid": "$NVMF_PORT", 00:22:20.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.661 "hdgst": ${hdgst:-false}, 00:22:20.661 "ddgst": ${ddgst:-false} 00:22:20.661 }, 00:22:20.661 "method": "bdev_nvme_attach_controller" 00:22:20.661 } 00:22:20.661 EOF 00:22:20.661 )") 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.661 { 00:22:20.661 "params": { 00:22:20.661 "name": "Nvme$subsystem", 00:22:20.661 "trtype": "$TEST_TRANSPORT", 00:22:20.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.661 "adrfam": "ipv4", 00:22:20.661 "trsvcid": "$NVMF_PORT", 00:22:20.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.661 "hdgst": ${hdgst:-false}, 00:22:20.661 "ddgst": ${ddgst:-false} 00:22:20.661 }, 00:22:20.661 "method": "bdev_nvme_attach_controller" 00:22:20.661 } 00:22:20.661 EOF 00:22:20.661 
)") 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.661 { 00:22:20.661 "params": { 00:22:20.661 "name": "Nvme$subsystem", 00:22:20.661 "trtype": "$TEST_TRANSPORT", 00:22:20.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.661 "adrfam": "ipv4", 00:22:20.661 "trsvcid": "$NVMF_PORT", 00:22:20.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.661 "hdgst": ${hdgst:-false}, 00:22:20.661 "ddgst": ${ddgst:-false} 00:22:20.661 }, 00:22:20.661 "method": "bdev_nvme_attach_controller" 00:22:20.661 } 00:22:20.661 EOF 00:22:20.661 )") 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.661 { 00:22:20.661 "params": { 00:22:20.661 "name": "Nvme$subsystem", 00:22:20.661 "trtype": "$TEST_TRANSPORT", 00:22:20.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.661 "adrfam": "ipv4", 00:22:20.661 "trsvcid": "$NVMF_PORT", 00:22:20.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.661 "hdgst": ${hdgst:-false}, 00:22:20.661 "ddgst": ${ddgst:-false} 00:22:20.661 }, 00:22:20.661 "method": "bdev_nvme_attach_controller" 00:22:20.661 } 00:22:20.661 EOF 00:22:20.661 )") 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:20.661 
19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:20.661 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:20.661 "params": { 00:22:20.661 "name": "Nvme1", 00:22:20.661 "trtype": "tcp", 00:22:20.661 "traddr": "10.0.0.2", 00:22:20.661 "adrfam": "ipv4", 00:22:20.661 "trsvcid": "4420", 00:22:20.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.661 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:20.661 "hdgst": false, 00:22:20.661 "ddgst": false 00:22:20.661 }, 00:22:20.661 "method": "bdev_nvme_attach_controller" 00:22:20.661 },{ 00:22:20.661 "params": { 00:22:20.661 "name": "Nvme2", 00:22:20.661 "trtype": "tcp", 00:22:20.661 "traddr": "10.0.0.2", 00:22:20.661 "adrfam": "ipv4", 00:22:20.661 "trsvcid": "4420", 00:22:20.661 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:20.661 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:20.661 "hdgst": false, 00:22:20.661 "ddgst": false 00:22:20.661 }, 00:22:20.662 "method": "bdev_nvme_attach_controller" 00:22:20.662 },{ 00:22:20.662 "params": { 00:22:20.662 "name": "Nvme3", 00:22:20.662 "trtype": "tcp", 00:22:20.662 "traddr": "10.0.0.2", 00:22:20.662 "adrfam": "ipv4", 00:22:20.662 "trsvcid": "4420", 00:22:20.662 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:20.662 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:20.662 "hdgst": false, 00:22:20.662 "ddgst": false 00:22:20.662 }, 00:22:20.662 "method": "bdev_nvme_attach_controller" 00:22:20.662 },{ 00:22:20.662 "params": { 00:22:20.662 "name": "Nvme4", 00:22:20.662 "trtype": "tcp", 00:22:20.662 "traddr": "10.0.0.2", 00:22:20.662 "adrfam": "ipv4", 00:22:20.662 "trsvcid": "4420", 00:22:20.662 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:20.662 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:20.662 "hdgst": false, 00:22:20.662 "ddgst": false 00:22:20.662 }, 
00:22:20.662 "method": "bdev_nvme_attach_controller" 00:22:20.662 },{ 00:22:20.662 "params": { 00:22:20.662 "name": "Nvme5", 00:22:20.662 "trtype": "tcp", 00:22:20.662 "traddr": "10.0.0.2", 00:22:20.662 "adrfam": "ipv4", 00:22:20.662 "trsvcid": "4420", 00:22:20.662 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:20.662 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:20.662 "hdgst": false, 00:22:20.662 "ddgst": false 00:22:20.662 }, 00:22:20.662 "method": "bdev_nvme_attach_controller" 00:22:20.662 },{ 00:22:20.662 "params": { 00:22:20.662 "name": "Nvme6", 00:22:20.662 "trtype": "tcp", 00:22:20.662 "traddr": "10.0.0.2", 00:22:20.662 "adrfam": "ipv4", 00:22:20.662 "trsvcid": "4420", 00:22:20.662 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:20.662 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:20.662 "hdgst": false, 00:22:20.662 "ddgst": false 00:22:20.662 }, 00:22:20.662 "method": "bdev_nvme_attach_controller" 00:22:20.662 },{ 00:22:20.662 "params": { 00:22:20.662 "name": "Nvme7", 00:22:20.662 "trtype": "tcp", 00:22:20.662 "traddr": "10.0.0.2", 00:22:20.662 "adrfam": "ipv4", 00:22:20.662 "trsvcid": "4420", 00:22:20.662 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:20.662 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:20.662 "hdgst": false, 00:22:20.662 "ddgst": false 00:22:20.662 }, 00:22:20.662 "method": "bdev_nvme_attach_controller" 00:22:20.662 },{ 00:22:20.662 "params": { 00:22:20.662 "name": "Nvme8", 00:22:20.662 "trtype": "tcp", 00:22:20.662 "traddr": "10.0.0.2", 00:22:20.662 "adrfam": "ipv4", 00:22:20.662 "trsvcid": "4420", 00:22:20.662 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:20.662 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:20.662 "hdgst": false, 00:22:20.662 "ddgst": false 00:22:20.662 }, 00:22:20.662 "method": "bdev_nvme_attach_controller" 00:22:20.662 },{ 00:22:20.662 "params": { 00:22:20.662 "name": "Nvme9", 00:22:20.662 "trtype": "tcp", 00:22:20.662 "traddr": "10.0.0.2", 00:22:20.662 "adrfam": "ipv4", 00:22:20.662 "trsvcid": "4420", 00:22:20.662 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:20.662 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:20.662 "hdgst": false, 00:22:20.662 "ddgst": false 00:22:20.662 }, 00:22:20.662 "method": "bdev_nvme_attach_controller" 00:22:20.662 },{ 00:22:20.662 "params": { 00:22:20.662 "name": "Nvme10", 00:22:20.662 "trtype": "tcp", 00:22:20.662 "traddr": "10.0.0.2", 00:22:20.662 "adrfam": "ipv4", 00:22:20.662 "trsvcid": "4420", 00:22:20.662 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:20.662 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:20.662 "hdgst": false, 00:22:20.662 "ddgst": false 00:22:20.662 }, 00:22:20.662 "method": "bdev_nvme_attach_controller" 00:22:20.662 }' 00:22:20.920 [2024-07-25 19:13:13.132988] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:20.920 [2024-07-25 19:13:13.133061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951193 ] 00:22:20.920 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.920 [2024-07-25 19:13:13.204094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.920 [2024-07-25 19:13:13.315657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.821 Running I/O for 10 seconds... 
00:22:22.821 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:22.821 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:22.821 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:22.821 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.821 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:22.821 19:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:22.821 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:23.078 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:23.078 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:23.078 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:23.078 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:23.078 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.078 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:23.078 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:23.078 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:23.078 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:23.078 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:23.346 19:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 950998 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 950998 ']' 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 950998 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 950998 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 950998' 00:22:23.346 killing process with pid 950998 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 950998 00:22:23.346 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 950998 00:22:23.346 [2024-07-25 19:13:15.764537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112c920 is same with the state(5) to be set 00:22:23.346 [2024-07-25 19:13:15.766867] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112f420 is same with the state(5) to be set 00:22:23.347 [2024-07-25 19:13:15.768980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112cde0 is same with the state(5) to be set 00:22:23.347 [2024-07-25 19:13:15.770938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112d2a0 is same with the state(5) to be set 00:22:23.348 [2024-07-25 19:13:15.773535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112dc40 is same with the state(5) to be set 00:22:23.348 [2024-07-25 19:13:15.774168] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112dc40 is same with the state(5) to be set 00:22:23.349 [2024-07-25 19:13:15.774180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112dc40 is same with the state(5) to be set 00:22:23.349 [2024-07-25 19:13:15.774192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112dc40 is same with the state(5) to be set 00:22:23.349 [2024-07-25 19:13:15.774204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112dc40 is same with the state(5) to be set 00:22:23.349 [2024-07-25 19:13:15.774216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112dc40 is same with the state(5) to be set 00:22:23.349 [2024-07-25 19:13:15.774228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112dc40 is same with the state(5) to be set 00:22:23.349 [2024-07-25 19:13:15.774240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112dc40 is same with the state(5) to be set 00:22:23.349 [2024-07-25 19:13:15.774252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112dc40 is same with the state(5) to be set 00:22:23.349 [2024-07-25 19:13:15.774265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112dc40 is same with the state(5) to be set 00:22:23.349 [2024-07-25 19:13:15.774277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112dc40 is same with the state(5) to be set 00:22:23.349 [2024-07-25 19:13:15.774289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112dc40 is same with the state(5) to be set 00:22:23.349 [2024-07-25 19:13:15.774301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112dc40 is same with the state(5) to be set 00:22:23.349 [2024-07-25 19:13:15.774313] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112dc40 is same with the state(5) to be set
00:22:23.349 [2024-07-25 19:13:15.775457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e100 is same with the state(5) to be set
00:22:23.350 (the same message for tqpair=0x112e100 repeated roughly 50 more times, through [2024-07-25 19:13:15.776295])
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e100 is same with the state(5) to be set
00:22:23.350 (the same message for tqpair=0x112e100 repeated twice more, [2024-07-25 19:13:15.776307] through [2024-07-25 19:13:15.776335])
00:22:23.350 [2024-07-25 19:13:15.777749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:23.350 [2024-07-25 19:13:15.777798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.350 (the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3)
00:22:23.350 [2024-07-25 19:13:15.777967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x292e670 is same with the state(5) to be set
00:22:23.350 [2024-07-25 19:13:15.778007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e5c0 is same with the state(5) to be set
00:22:23.350 (the same message for tqpair=0x112e5c0 repeated roughly 70 more times through [2024-07-25 19:13:15.778913], interleaved with the nvme host-side messages that follow)
00:22:23.350 [2024-07-25 19:13:15.778045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:23.350 [2024-07-25 19:13:15.778074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.350 (the same pair repeated for cid:1, cid:2 and cid:3)
00:22:23.351 [2024-07-25 19:13:15.778190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27acf00 is same with the state(5) to be set
00:22:23.351 (another ASYNC EVENT REQUEST / ABORTED - SQ DELETION sequence for cid:0 through cid:3)
00:22:23.351 [2024-07-25 19:13:15.778369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x277c830 is same with the state(5) to be set
00:22:23.351 (another ASYNC EVENT REQUEST / ABORTED - SQ DELETION sequence for cid:0 through cid:3)
00:22:23.351 [2024-07-25 19:13:15.778549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x278b360 is same with the state(5) to be set
00:22:23.351 (another ASYNC EVENT REQUEST / ABORTED - SQ DELETION sequence for cid:0 through cid:3)
00:22:23.351 [2024-07-25 19:13:15.778725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28622e0 is same with the state(5) to be set
00:22:23.352 (another ASYNC EVENT REQUEST / ABORTED - SQ DELETION sequence for cid:0 through cid:3)
00:22:23.352 [2024-07-25 19:13:15.778924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e5c0 is same with the state(5) to 
be set 00:22:23.352 [2024-07-25 19:13:15.778928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.778942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27a0b50 is same with the state(5) to be set 00:22:23.352 [2024-07-25 19:13:15.778989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.352 [2024-07-25 19:13:15.779010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.779025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.352 [2024-07-25 19:13:15.779039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.779052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.352 [2024-07-25 19:13:15.779065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.779079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.352 [2024-07-25 19:13:15.779094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.779115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x279f3a0 is same with the state(5) to be set 00:22:23.352 [2024-07-25 19:13:15.779162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.352 [2024-07-25 19:13:15.779183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.779199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.352 [2024-07-25 19:13:15.779213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.779231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.352 [2024-07-25 19:13:15.779245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.779259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.352 [2024-07-25 19:13:15.779272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.779284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ac4a0 is same with the state(5) to be set 00:22:23.352 [2024-07-25 19:13:15.779670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.352 [2024-07-25 19:13:15.779702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.779741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:23.352 [2024-07-25 19:13:15.779767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.779794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.352 [2024-07-25 19:13:15.779820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.779847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.352 [2024-07-25 19:13:15.779871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.779899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.352 [2024-07-25 19:13:15.779917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.779933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.352 [2024-07-25 19:13:15.779947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.779963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.352 [2024-07-25 19:13:15.779976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.779992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.352 [2024-07-25 19:13:15.780005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.780021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.352 [2024-07-25 19:13:15.780034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.780049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.352 [2024-07-25 19:13:15.780046] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with [2024-07-25 19:13:15.780063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:23.352 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.780094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.352 [2024-07-25 19:13:15.780099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.352 [2024-07-25 19:13:15.780116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.780128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.352 [2024-07-25 19:13:15.780134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.352 [2024-07-25 19:13:15.780142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.352 [2024-07-25 19:13:15.780149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.780155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.352 [2024-07-25 19:13:15.780165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:1[2024-07-25 19:13:15.780167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.352 the state(5) to be set 00:22:23.352 [2024-07-25 19:13:15.780181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 19:13:15.780181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 the state(5) to be set 00:22:23.352 [2024-07-25 19:13:15.780197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.352 [2024-07-25 19:13:15.780200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.352 [2024-07-25 19:13:15.780209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.352 [2024-07-25 19:13:15.780215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.352 [2024-07-25 19:13:15.780222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.352 [2024-07-25 19:13:15.780231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 [2024-07-25 19:13:15.780234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 19:13:15.780247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780261] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 [2024-07-25 19:13:15.780273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 [2024-07-25 19:13:15.780285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780298] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 [2024-07-25 19:13:15.780310] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 [2024-07-25 19:13:15.780322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 [2024-07-25 19:13:15.780334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 [2024-07-25 19:13:15.780347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with [2024-07-25 19:13:15.780360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:1the state(5) to be set 00:22:23.353 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 [2024-07-25 19:13:15.780373] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 [2024-07-25 19:13:15.780385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 
[2024-07-25 19:13:15.780398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 [2024-07-25 19:13:15.780411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 [2024-07-25 19:13:15.780423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:1[2024-07-25 19:13:15.780435] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 19:13:15.780449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 [2024-07-25 19:13:15.780476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same 
with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 [2024-07-25 19:13:15.780488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:1[2024-07-25 19:13:15.780501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 19:13:15.780517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 [2024-07-25 19:13:15.780544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 [2024-07-25 19:13:15.780556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 [2024-07-25 19:13:15.780569] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 [2024-07-25 19:13:15.780581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:1[2024-07-25 19:13:15.780594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 19:13:15.780609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 [2024-07-25 19:13:15.780636] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 [2024-07-25 19:13:15.780648] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 [2024-07-25 19:13:15.780660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 [2024-07-25 19:13:15.780677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 [2024-07-25 19:13:15.780690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 19:13:15.780703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 [2024-07-25 19:13:15.780728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 
00:22:23.353 [2024-07-25 19:13:15.780733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 [2024-07-25 19:13:15.780741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.353 [2024-07-25 19:13:15.780753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.353 [2024-07-25 19:13:15.780764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.353 [2024-07-25 19:13:15.780766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.354 [2024-07-25 19:13:15.780779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.354 [2024-07-25 19:13:15.780780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.780791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.354 [2024-07-25 19:13:15.780795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.780804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.354 [2024-07-25 19:13:15.780811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.780816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.354 [2024-07-25 19:13:15.780825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.780828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.354 [2024-07-25 19:13:15.780841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with [2024-07-25 19:13:15.780841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:1the state(5) to be set 00:22:23.354 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.780858] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.354 [2024-07-25 19:13:15.780860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.780870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.354 [2024-07-25 19:13:15.780877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.780883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.354 [2024-07-25 19:13:15.780891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.780895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x112eaa0 is same with the state(5) to be set 00:22:23.354 [2024-07-25 19:13:15.780907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eaa0 is same with [2024-07-25 19:13:15.780908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:1the state(5) to be set 00:22:23.354 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.780923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.780939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.780953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.780968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.780982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.780998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 
19:13:15.781232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.354 [2024-07-25 19:13:15.781623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.354 [2024-07-25 19:13:15.781628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112ef60 is same with the state(5) to be set 00:22:23.354 [2024-07-25 19:13:15.781638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.781655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.781658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112ef60 is same with the state(5) to be set 00:22:23.355 [2024-07-25 19:13:15.781671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.781672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112ef60 is same with the state(5) to be set 00:22:23.355 [2024-07-25 19:13:15.781686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112ef60 is same with the state(5) to be set 00:22:23.355 [2024-07-25 
19:13:15.781693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.781710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.781723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.781738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.781752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.781768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.781781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.781817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:23.355 [2024-07-25 19:13:15.781894] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2909b60 was disconnected and freed. reset controller. 
00:22:23.355 [2024-07-25 19:13:15.782018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782226] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 
[2024-07-25 19:13:15.782736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.355 [2024-07-25 19:13:15.782765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.355 [2024-07-25 19:13:15.782780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.782793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.782809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.782823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.782838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.782851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.782870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.782884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.782900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.782913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.782928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.782941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.782956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.782969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.782984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.782998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 
[2024-07-25 19:13:15.783397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.356 [2024-07-25 19:13:15.783698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.356 [2024-07-25 19:13:15.783712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.783725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.783740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.783753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.783768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.783781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.783795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.783809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.783825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.783838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.783852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.783866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.783880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.783894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.783908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.783921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.784000] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2775c50 was disconnected and freed. reset controller. 00:22:23.357 [2024-07-25 19:13:15.784723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.784749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.784770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.784785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.784801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.784815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.784830] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.784844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.784859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.784873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.784888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.784902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.784917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.784930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.784946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.784959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.784975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.784988] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 
[2024-07-25 19:13:15.785355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.357 [2024-07-25 19:13:15.785523] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.357 [2024-07-25 19:13:15.785539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.785552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.785568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.785582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.785598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.785612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.785629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.785643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.785658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.785672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.785688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.785702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.785718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.785731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.785747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.785761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.785777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.785790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.785806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.785825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.785842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.785855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.358 [2024-07-25 19:13:15.785872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.785889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.785905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.785919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.785935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.785949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.785964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.785978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.785994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786037] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 
19:13:15.786562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.358 [2024-07-25 19:13:15.786657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.358 [2024-07-25 19:13:15.786673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.359 [2024-07-25 19:13:15.792791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.792929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:23.359 [2024-07-25 19:13:15.793023] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2883260 was disconnected and freed. reset controller. 
00:22:23.359 [2024-07-25 19:13:15.793319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.359 [2024-07-25 19:13:15.793351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.793377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.359 [2024-07-25 19:13:15.793405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.793424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.359 [2024-07-25 19:13:15.793440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.793455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.359 [2024-07-25 19:13:15.793468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.793482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2943460 is same with the state(5) to be set 00:22:23.359 [2024-07-25 19:13:15.793531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.359 [2024-07-25 19:13:15.793551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.793567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.359 [2024-07-25 19:13:15.793581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.793594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.359 [2024-07-25 19:13:15.793607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.793621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.359 [2024-07-25 19:13:15.793635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.793648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27a8d00 is same with the state(5) to be set 00:22:23.359 [2024-07-25 19:13:15.793679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x292e670 (9): Bad file descriptor 00:22:23.359 [2024-07-25 19:13:15.793707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27acf00 (9): Bad file descriptor 00:22:23.359 [2024-07-25 19:13:15.793745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x277c830 (9): Bad file descriptor 00:22:23.359 [2024-07-25 19:13:15.793776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x278b360 (9): Bad file descriptor 00:22:23.359 [2024-07-25 19:13:15.793805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28622e0 (9): Bad file descriptor 00:22:23.359 [2024-07-25 19:13:15.793830] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27a0b50 (9): Bad file descriptor 00:22:23.359 [2024-07-25 19:13:15.793854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x279f3a0 (9): Bad file descriptor 00:22:23.359 [2024-07-25 19:13:15.793877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27ac4a0 (9): Bad file descriptor 00:22:23.359 [2024-07-25 19:13:15.797888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:23.359 [2024-07-25 19:13:15.797946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:23.359 [2024-07-25 19:13:15.798624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:23.359 [2024-07-25 19:13:15.798671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2943460 (9): Bad file descriptor 00:22:23.359 [2024-07-25 19:13:15.798886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.359 [2024-07-25 19:13:15.798917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27a0b50 with addr=10.0.0.2, port=4420 00:22:23.359 [2024-07-25 19:13:15.798935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27a0b50 is same with the state(5) to be set 00:22:23.359 [2024-07-25 19:13:15.799198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.359 [2024-07-25 19:13:15.799225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x279f3a0 with addr=10.0.0.2, port=4420 00:22:23.359 [2024-07-25 19:13:15.799240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x279f3a0 is same with the state(5) to be set 00:22:23.359 [2024-07-25 19:13:15.800060] 
nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:23.359 [2024-07-25 19:13:15.800169] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:23.359 [2024-07-25 19:13:15.800657] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:23.359 [2024-07-25 19:13:15.800716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27a0b50 (9): Bad file descriptor 00:22:23.359 [2024-07-25 19:13:15.800752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x279f3a0 (9): Bad file descriptor 00:22:23.359 [2024-07-25 19:13:15.800845] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:23.359 [2024-07-25 19:13:15.800955] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:23.359 [2024-07-25 19:13:15.801100] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:23.359 [2024-07-25 19:13:15.801205] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:23.359 [2024-07-25 19:13:15.801419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.359 [2024-07-25 19:13:15.801452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2943460 with addr=10.0.0.2, port=4420 00:22:23.359 [2024-07-25 19:13:15.801469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2943460 is same with the state(5) to be set 00:22:23.359 [2024-07-25 19:13:15.801484] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:23.359 [2024-07-25 19:13:15.801497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:23.359 [2024-07-25 19:13:15.801512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:23.359 [2024-07-25 19:13:15.801545] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:23.359 [2024-07-25 19:13:15.801560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:23.359 [2024-07-25 19:13:15.801573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:23.359 [2024-07-25 19:13:15.801727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:23.359 [2024-07-25 19:13:15.801752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:23.359 [2024-07-25 19:13:15.801769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2943460 (9): Bad file descriptor 00:22:23.359 [2024-07-25 19:13:15.801823] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:23.359 [2024-07-25 19:13:15.801839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:23.359 [2024-07-25 19:13:15.801852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:23.359 [2024-07-25 19:13:15.801908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:23.359 [2024-07-25 19:13:15.803242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27a8d00 (9): Bad file descriptor 00:22:23.359 [2024-07-25 19:13:15.803424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.359 [2024-07-25 19:13:15.803449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.803478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.359 [2024-07-25 19:13:15.803494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.803511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.359 [2024-07-25 19:13:15.803525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.803541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.359 [2024-07-25 19:13:15.803555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.803570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.359 [2024-07-25 19:13:15.803584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.803600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.359 [2024-07-25 19:13:15.803613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.803629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.359 [2024-07-25 19:13:15.803643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.803659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.359 [2024-07-25 19:13:15.803672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.803688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.359 [2024-07-25 19:13:15.803707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.359 [2024-07-25 19:13:15.803724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.359 [2024-07-25 19:13:15.803738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.803754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.803768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.803784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.803798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.803813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.803827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.803842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.803856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.803871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.803885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.803901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.803915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.803931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.803945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.803961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.803975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.803990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804121] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804284] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 
19:13:15.804632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.360 [2024-07-25 19:13:15.804691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.360 [2024-07-25 19:13:15.804705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.804721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.804735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.804751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.804764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.804780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.804793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.804809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.804823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.804842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.804856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.804872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.804886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.804902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.804916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.804931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.804945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.804961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.804975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.804991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.805005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.805021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.805034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.805051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.805065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.805081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.805094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.805117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.805132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.361 [2024-07-25 19:13:15.805149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.805162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.805179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.805193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.805209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.805226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.805243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.805256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.805273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.805286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.805302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.805316] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.805332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.805345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.805361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.805375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.805389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x288b1f0 is same with the state(5) to be set 00:22:23.361 [2024-07-25 19:13:15.806660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.806684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.806705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.806720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.806736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.806750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.806766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.806780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.806795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.806809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.806825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.806838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.806854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.806877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.806893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.806907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.806923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:23.361 [2024-07-25 19:13:15.806937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.806952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.361 [2024-07-25 19:13:15.806966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.361 [2024-07-25 19:13:15.806982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.806995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.807025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.807055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.807084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807107] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.807123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.807153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.807183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.807213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.807243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.807277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.807308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.807338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.807368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.807398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.807428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:23.626 [2024-07-25 19:13:15.807457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.626 [2024-07-25 19:13:15.807473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807623] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807790] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.807983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.807997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 
19:13:15.808149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.627 [2024-07-25 19:13:15.808611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.627 [2024-07-25 19:13:15.808626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x290af50 is same with the state(5) to be set 00:22:23.627 [2024-07-25 19:13:15.809868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:23.627 [2024-07-25 19:13:15.809891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.809913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.809928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.809944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.809958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.809974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.809988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810587] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810748] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.810976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.810990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.811006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.811019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.811035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.811048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 19:13:15.811064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.628 [2024-07-25 19:13:15.811078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.628 [2024-07-25 
19:13:15.811094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.629 [2024-07-25 19:13:15.811610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811771] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.811801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.811815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2777100 is same with the state(5) to be set 00:22:23.629 [2024-07-25 19:13:15.813050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.813073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.813095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.813122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.813139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.813153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.813174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.813188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.813204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.813218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.813234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.813248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.813264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.813279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.813295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.813308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.813324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.813338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.813353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.813367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.813383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.813397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.813413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.813426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.813442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.813456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.813472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.813486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.629 [2024-07-25 19:13:15.813501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.629 [2024-07-25 19:13:15.813515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813531] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813697] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.813983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.813997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 
19:13:15.814043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 
nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.630 [2024-07-25 19:13:15.814565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.630 [2024-07-25 19:13:15.814687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.630 [2024-07-25 19:13:15.814701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.814717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.814730] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.814746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.814760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.814775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.814789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.814804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.814818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.814833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.814847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.814862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.814876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.814892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.814906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.814921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.814934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.814950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.814963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.814979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.814993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.815007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27785b0 is same with the state(5) to be set 00:22:23.631 [2024-07-25 19:13:15.816245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816477] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.631 [2024-07-25 19:13:15.816892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.631 [2024-07-25 19:13:15.816905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.632 [2024-07-25 19:13:15.816921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.632 [2024-07-25 19:13:15.816935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.632 [2024-07-25 19:13:15.816951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.632 [2024-07-25 19:13:15.816964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.632 [2024-07-25 19:13:15.816981] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.632 [2024-07-25 19:13:15.816994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.632 [2024-07-25 19:13:15.817010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.632 [2024-07-25 19:13:15.817024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.632 [2024-07-25 19:13:15.817043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.632 [2024-07-25 19:13:15.817057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.632 [2024-07-25 19:13:15.817073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.632 [2024-07-25 19:13:15.817086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.632 [2024-07-25 19:13:15.817108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.632 [2024-07-25 19:13:15.817124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.632 [2024-07-25 19:13:15.817140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.632 [2024-07-25 19:13:15.817154] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.632 [2024-07-25 19:13:15.817170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.632 [2024-07-25 19:13:15.817184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.632 [2024-07-25 19:13:15.817200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.632 [2024-07-25 19:13:15.817214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.632 [2024-07-25 19:13:15.817230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.632 [2024-07-25 19:13:15.817244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.632 [2024-07-25 19:13:15.817260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.632 [2024-07-25 19:13:15.817274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.632 [2024-07-25 19:13:15.817289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.632 [2024-07-25 19:13:15.817303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.632 [2024-07-25 19:13:15.817319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.632 [2024-07-25 19:13:15.817332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 27 identical command/completion pairs elided: READ sqid:1 cid:36-62 nsid:1 lba:20992-24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) ...]
00:22:23.633 [2024-07-25 19:13:15.818154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.633 [2024-07-25 19:13:15.818172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.633 [2024-07-25 19:13:15.818186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x30a9b80 is same with the state(5) to be set
00:22:23.633 [2024-07-25 19:13:15.819434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.633 [2024-07-25 19:13:15.819457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 identical command/completion pairs elided: READ sqid:1 cid:1-62 nsid:1 lba:8320-16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) ...]
00:22:23.634 [2024-07-25 19:13:15.821352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.634 [2024-07-25 19:13:15.821366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.634 [2024-07-25 19:13:15.821380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2884740 is same with the state(5) to be set
00:22:23.634 [2024-07-25 19:13:15.823738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:23.634 [2024-07-25 19:13:15.823772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:22:23.634 [2024-07-25 19:13:15.823791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:23.634 [2024-07-25 19:13:15.823808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:23.634 [2024-07-25 19:13:15.823925] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:23.634 [2024-07-25 19:13:15.823955] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:23.634 [2024-07-25 19:13:15.824077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:23.634 [2024-07-25 19:13:15.824116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:23.634 [2024-07-25 19:13:15.824371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.634 [2024-07-25 19:13:15.824401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x277c830 with addr=10.0.0.2, port=4420
00:22:23.634 [2024-07-25 19:13:15.824418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x277c830 is same with the state(5) to be set
00:22:23.634 [2024-07-25 19:13:15.824570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.634 [2024-07-25 19:13:15.824594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x278b360 with addr=10.0.0.2, port=4420
00:22:23.634 [2024-07-25 19:13:15.824609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x278b360 is same with the state(5) to be set
00:22:23.634 [2024-07-25 19:13:15.824755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.634 [2024-07-25 19:13:15.824779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27acf00 with addr=10.0.0.2, port=4420
00:22:23.634 [2024-07-25 19:13:15.824794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27acf00 is same with the state(5) to be set
00:22:23.634 [2024-07-25 19:13:15.824945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.634 [2024-07-25 19:13:15.824969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27ac4a0 with addr=10.0.0.2, port=4420
00:22:23.634 [2024-07-25 19:13:15.824984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ac4a0 is same with the state(5) to be set
00:22:23.634 [2024-07-25 19:13:15.826414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.634 [2024-07-25 19:13:15.826437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 15 identical command/completion pairs elided: READ sqid:1 cid:1-15 nsid:1 lba:16512-18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) ...]
00:22:23.635 [2024-07-25 19:13:15.826935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.635 [2024-07-25 19:13:15.826949]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.826965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.826979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.826995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.827009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.827038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.827067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.827096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.827138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.827168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.827197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.827226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.827259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.827290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.827319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.827349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.827378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.827408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 19:13:15.827437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.635 [2024-07-25 
19:13:15.827467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.635 [2024-07-25 19:13:15.827483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827631] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.827962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.827977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 
[2024-07-25 19:13:15.827991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.828006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.828020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.828039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.828053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.828070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.828083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.828100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.828121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.828138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.828151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.828168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.828182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.828197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.828211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.828227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.828241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.828256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.828270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.828286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.828300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.636 [2024-07-25 19:13:15.828316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.636 [2024-07-25 19:13:15.828329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.636 [2024-07-25 19:13:15.828345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.636 [2024-07-25 19:13:15.828359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.636 [2024-07-25 19:13:15.828374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.636 [2024-07-25 19:13:15.828387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.636 [2024-07-25 19:13:15.828402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x32515f0 is same with the state(5) to be set
00:22:23.636 [2024-07-25 19:13:15.831014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:23.636 [2024-07-25 19:13:15.831049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:23.636 [2024-07-25 19:13:15.831067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:23.636 task offset: 16384 on job bdev=Nvme2n1 fails
00:22:23.636
00:22:23.636 Latency(us)
00:22:23.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:23.636 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.636 Job: Nvme1n1 ended in about 0.88 seconds with error
00:22:23.636 Verification LBA range: start 0x0 length 0x400
00:22:23.636 Nvme1n1 : 0.88 145.10 9.07 72.55 0.00 290698.68 25631.86 271853.04
00:22:23.636 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.636 Job: Nvme2n1 ended in about 0.87 seconds with error
00:22:23.636 Verification LBA range: start 0x0 length 0x400
00:22:23.636 Nvme2n1 : 0.87 146.98 9.19 73.49 0.00 280724.42 24563.86 329330.54
00:22:23.636 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.636 Job: Nvme3n1 ended in about 0.89 seconds with error
00:22:23.636 Verification LBA range: start 0x0 length 0x400
00:22:23.636 Nvme3n1 : 0.89 216.86 13.55 72.29 0.00 209549.46 18835.53 237677.23
00:22:23.636 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.636 Job: Nvme4n1 ended in about 0.87 seconds with error
00:22:23.636 Verification LBA range: start 0x0 length 0x400
00:22:23.636 Nvme4n1 : 0.87 220.18 13.76 73.39 0.00 201652.15 18738.44 246997.90
00:22:23.636 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.636 Job: Nvme5n1 ended in about 0.89 seconds with error
00:22:23.636 Verification LBA range: start 0x0 length 0x400
00:22:23.636 Nvme5n1 : 0.89 144.05 9.00 72.03 0.00 268521.62 24369.68 267192.70
00:22:23.636 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.636 Job: Nvme6n1 ended in about 0.89 seconds with error
00:22:23.636 Verification LBA range: start 0x0 length 0x400
00:22:23.636 Nvme6n1 : 0.89 143.54 8.97 71.77 0.00 263697.45 28932.93 250104.79
00:22:23.636 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.637 Job: Nvme7n1 ended in about 0.89 seconds with error
00:22:23.637 Verification LBA range: start 0x0 length 0x400
00:22:23.637 Nvme7n1 : 0.89 143.03 8.94 71.52 0.00 258843.18 21359.88 278066.82
00:22:23.637 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.637 Job: Nvme8n1 ended in about 0.91 seconds with error
00:22:23.637 Verification LBA range: start 0x0 length 0x400
00:22:23.637 Nvme8n1 : 0.91 141.42 8.84 70.71 0.00 256339.56 18544.26 279620.27
00:22:23.637 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.637 Job: Nvme9n1 ended in about 0.87 seconds with error
00:22:23.637 Verification LBA range: start 0x0 length 0x400
00:22:23.637 Nvme9n1 : 0.87 146.55 9.16 73.28 0.00 239905.50 12281.93 287387.50
00:22:23.637 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.637 Job: Nvme10n1 ended in about 0.90 seconds with error
00:22:23.637 Verification LBA range: start 0x0 length 0x400
00:22:23.637 Nvme10n1 : 0.90 71.26 4.45 71.26 0.00 363659.57 31263.10 330883.98
00:22:23.637 ===================================================================================================================
00:22:23.637 Total : 1518.98 94.94 722.28 0.00 256397.32 12281.93 330883.98
00:22:23.637 [2024-07-25 19:13:15.859317] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:23.637 [2024-07-25 19:13:15.859389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:23.637 [2024-07-25 19:13:15.859696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.637 [2024-07-25 19:13:15.859731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28622e0 with addr=10.0.0.2, port=4420
00:22:23.637 [2024-07-25 19:13:15.859750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28622e0 is same with the state(5) to be set
00:22:23.637 [2024-07-25 19:13:15.859892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.637 [2024-07-25 19:13:15.859918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x292e670 with addr=10.0.0.2, port=4420
00:22:23.637 [2024-07-25 19:13:15.859934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x292e670 is same with the state(5) to be set
00:22:23.637 [2024-07-25 19:13:15.859959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x277c830 (9): Bad file descriptor
00:22:23.637 [2024-07-25 19:13:15.859981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x278b360 (9): Bad file descriptor
00:22:23.637 [2024-07-25 19:13:15.859999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27acf00 (9): Bad file descriptor
00:22:23.637 [2024-07-25 19:13:15.860017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27ac4a0 (9): Bad file descriptor
00:22:23.637 [2024-07-25 19:13:15.860499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.637 [2024-07-25 19:13:15.860528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x279f3a0 with addr=10.0.0.2, port=4420
00:22:23.637 [2024-07-25 19:13:15.860544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x279f3a0 is same with the state(5) to be set
00:22:23.637 [2024-07-25 19:13:15.860708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.637 [2024-07-25 19:13:15.860733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27a0b50 with addr=10.0.0.2, port=4420
00:22:23.637 [2024-07-25 19:13:15.860755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27a0b50 is same with the state(5) to be set
00:22:23.637 [2024-07-25 19:13:15.860894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.637 [2024-07-25 19:13:15.860919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2943460 with addr=10.0.0.2, port=4420
00:22:23.637 [2024-07-25 19:13:15.860934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2943460 is same with the state(5) to be set
00:22:23.637 [2024-07-25 19:13:15.861087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.637 [2024-07-25 19:13:15.861121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27a8d00 with addr=10.0.0.2, port=4420
00:22:23.637 [2024-07-25 19:13:15.861138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27a8d00 is same with the state(5) to be set
00:22:23.637 [2024-07-25 19:13:15.861156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28622e0 (9): Bad file descriptor
00:22:23.637 [2024-07-25 19:13:15.861175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x292e670 (9): Bad file descriptor
00:22:23.637 [2024-07-25 19:13:15.861190] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:23.637 [2024-07-25 19:13:15.861203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:23.637 [2024-07-25 19:13:15.861218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:23.637 [2024-07-25 19:13:15.861239] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:22:23.637 [2024-07-25 19:13:15.861252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:22:23.637 [2024-07-25 19:13:15.861270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:22:23.637 [2024-07-25 19:13:15.861287] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:22:23.637 [2024-07-25 19:13:15.861301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:22:23.637 [2024-07-25 19:13:15.861314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:22:23.637 [2024-07-25 19:13:15.861330] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:22:23.637 [2024-07-25 19:13:15.861343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:22:23.637 [2024-07-25 19:13:15.861355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:22:23.637 [2024-07-25 19:13:15.861386] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:23.637 [2024-07-25 19:13:15.861406] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:23.637 [2024-07-25 19:13:15.861425] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:23.637 [2024-07-25 19:13:15.861442] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:23.637 [2024-07-25 19:13:15.861461] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:23.637 [2024-07-25 19:13:15.861479] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:23.637 [2024-07-25 19:13:15.861867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:23.637 [2024-07-25 19:13:15.861894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:23.637 [2024-07-25 19:13:15.861907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:23.637 [2024-07-25 19:13:15.861918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:23.637 [2024-07-25 19:13:15.861933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x279f3a0 (9): Bad file descriptor
00:22:23.637 [2024-07-25 19:13:15.861952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27a0b50 (9): Bad file descriptor
00:22:23.637 [2024-07-25 19:13:15.861969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2943460 (9): Bad file descriptor
00:22:23.637 [2024-07-25 19:13:15.861986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27a8d00 (9): Bad file descriptor
00:22:23.637 [2024-07-25 19:13:15.862001] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:22:23.637 [2024-07-25 19:13:15.862014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:22:23.637 [2024-07-25 19:13:15.862026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:22:23.637 [2024-07-25 19:13:15.862043] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:22:23.637 [2024-07-25 19:13:15.862057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:22:23.637 [2024-07-25 19:13:15.862070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:22:23.637 [2024-07-25 19:13:15.862140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:23.637 [2024-07-25 19:13:15.862159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:23.637 [2024-07-25 19:13:15.862171] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:22:23.637 [2024-07-25 19:13:15.862183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:22:23.637 [2024-07-25 19:13:15.862202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:22:23.637 [2024-07-25 19:13:15.862218] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:22:23.637 [2024-07-25 19:13:15.862232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:22:23.637 [2024-07-25 19:13:15.862244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:22:23.637 [2024-07-25 19:13:15.862260] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:22:23.637 [2024-07-25 19:13:15.862273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:22:23.637 [2024-07-25 19:13:15.862286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:22:23.637 [2024-07-25 19:13:15.862302] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:22:23.637 [2024-07-25 19:13:15.862315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:22:23.637 [2024-07-25 19:13:15.862327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:22:23.637 [2024-07-25 19:13:15.862373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:23.637 [2024-07-25 19:13:15.862392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:23.637 [2024-07-25 19:13:15.862404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:23.637 [2024-07-25 19:13:15.862415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:23.896 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:23.896 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 951193 00:22:25.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (951193) - No such process 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@117 -- # sync 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:25.301 rmmod nvme_tcp 00:22:25.301 rmmod nvme_fabrics 00:22:25.301 rmmod nvme_keyring 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.301 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.202 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:27.202 00:22:27.202 real 0m8.069s 00:22:27.202 user 0m20.163s 00:22:27.202 sys 0m1.599s 00:22:27.202 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:27.202 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:27.202 ************************************ 00:22:27.202 END TEST nvmf_shutdown_tc3 00:22:27.202 ************************************ 00:22:27.202 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:27.202 00:22:27.202 real 0m29.586s 00:22:27.202 user 1m22.994s 00:22:27.202 sys 0m6.893s 00:22:27.202 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:27.202 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:27.202 ************************************ 00:22:27.202 END TEST nvmf_shutdown 00:22:27.202 ************************************ 00:22:27.202 19:13:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:22:27.202 00:22:27.202 real 10m55.667s 00:22:27.202 user 25m42.195s 00:22:27.202 sys 2m47.722s 00:22:27.202 19:13:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:27.202 19:13:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:27.202 ************************************ 00:22:27.202 END TEST nvmf_target_extra 00:22:27.202 ************************************ 00:22:27.202 19:13:19 nvmf_tcp -- 
nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:27.202 19:13:19 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:27.202 19:13:19 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:27.202 19:13:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:27.202 ************************************ 00:22:27.202 START TEST nvmf_host 00:22:27.202 ************************************ 00:22:27.202 19:13:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:27.202 * Looking for test storage... 00:22:27.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.203 ************************************ 00:22:27.203 START TEST nvmf_multicontroller 00:22:27.203 ************************************ 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:27.203 * Looking for test storage... 
00:22:27.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.203 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:27.462 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:27.463 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:27.463 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:27.463 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:27.463 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.463 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:27.463 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:27.463 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:27.463 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.463 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.463 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.463 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:27.463 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:27.463 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:27.463 19:13:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@291 -- # pci_devs=() 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:29.994 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:29.994 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:29.994 Found net devices under 0000:09:00.0: cvl_0_0 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:29.994 Found net devices under 0000:09:00.1: cvl_0_1 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@414 -- # is_hw=yes 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.994 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:29.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:22:29.995 00:22:29.995 --- 10.0.0.2 ping statistics --- 00:22:29.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.995 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:29.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:22:29.995 00:22:29.995 --- 10.0.0.1 ping statistics --- 00:22:29.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.995 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=954037 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 954037 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 954037 ']' 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:29.995 19:13:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:29.995 [2024-07-25 19:13:22.375818] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:29.995 [2024-07-25 19:13:22.375910] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.995 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.995 [2024-07-25 19:13:22.451712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:30.253 [2024-07-25 19:13:22.561370] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.253 [2024-07-25 19:13:22.561462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:30.253 [2024-07-25 19:13:22.561490] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.253 [2024-07-25 19:13:22.561502] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.253 [2024-07-25 19:13:22.561512] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.253 [2024-07-25 19:13:22.561595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.253 [2024-07-25 19:13:22.561661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.253 [2024-07-25 19:13:22.561664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.185 [2024-07-25 19:13:23.349203] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.185 Malloc0 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.185 [2024-07-25 
19:13:23.416921] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.185 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.186 [2024-07-25 19:13:23.424837] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.186 Malloc1 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=954190 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 954190 /var/tmp/bdevperf.sock 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 954190 ']' 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.186 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.443 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.443 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:31.443 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:31.443 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.443 19:13:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.701 NVMe0n1 00:22:31.701 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.701 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:31.701 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:31.701 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.701 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.701 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.701 1 00:22:31.701 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:31.701 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:31.701 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:31.701 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:31.701 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.701 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:31.701 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 
00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.702 request: 00:22:31.702 { 00:22:31.702 "name": "NVMe0", 00:22:31.702 "trtype": "tcp", 00:22:31.702 "traddr": "10.0.0.2", 00:22:31.702 "adrfam": "ipv4", 00:22:31.702 "trsvcid": "4420", 00:22:31.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.702 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:31.702 "hostaddr": "10.0.0.2", 00:22:31.702 "hostsvcid": "60000", 00:22:31.702 "prchk_reftag": false, 00:22:31.702 "prchk_guard": false, 00:22:31.702 "hdgst": false, 00:22:31.702 "ddgst": false, 00:22:31.702 "method": "bdev_nvme_attach_controller", 00:22:31.702 "req_id": 1 00:22:31.702 } 00:22:31.702 Got JSON-RPC error response 00:22:31.702 response: 00:22:31.702 { 00:22:31.702 "code": -114, 00:22:31.702 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:31.702 } 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:31.702 19:13:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.702 request: 00:22:31.702 { 00:22:31.702 "name": "NVMe0", 00:22:31.702 "trtype": "tcp", 00:22:31.702 "traddr": "10.0.0.2", 00:22:31.702 "adrfam": "ipv4", 00:22:31.702 "trsvcid": "4420", 00:22:31.702 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:31.702 "hostaddr": "10.0.0.2", 00:22:31.702 "hostsvcid": "60000", 00:22:31.702 "prchk_reftag": false, 00:22:31.702 "prchk_guard": false, 00:22:31.702 "hdgst": false, 00:22:31.702 "ddgst": false, 00:22:31.702 "method": "bdev_nvme_attach_controller", 00:22:31.702 "req_id": 1 00:22:31.702 } 00:22:31.702 Got JSON-RPC error response 00:22:31.702 response: 00:22:31.702 { 00:22:31.702 "code": -114, 00:22:31.702 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:31.702 } 00:22:31.702 
19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:31.702 19:13:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.702 request: 00:22:31.702 { 00:22:31.702 "name": "NVMe0", 00:22:31.702 "trtype": "tcp", 00:22:31.702 "traddr": "10.0.0.2", 00:22:31.702 "adrfam": "ipv4", 00:22:31.702 "trsvcid": "4420", 00:22:31.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.702 "hostaddr": "10.0.0.2", 00:22:31.702 "hostsvcid": "60000", 00:22:31.702 "prchk_reftag": false, 00:22:31.702 "prchk_guard": false, 00:22:31.702 "hdgst": false, 00:22:31.702 "ddgst": false, 00:22:31.702 "multipath": "disable", 00:22:31.702 "method": "bdev_nvme_attach_controller", 00:22:31.702 "req_id": 1 00:22:31.702 } 00:22:31.702 Got JSON-RPC error response 00:22:31.702 response: 00:22:31.702 { 00:22:31.702 "code": -114, 00:22:31.702 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:31.702 } 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.702 request: 00:22:31.702 { 00:22:31.702 "name": "NVMe0", 00:22:31.702 "trtype": "tcp", 00:22:31.702 "traddr": "10.0.0.2", 00:22:31.702 "adrfam": "ipv4", 00:22:31.702 "trsvcid": "4420", 00:22:31.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.702 "hostaddr": "10.0.0.2", 00:22:31.702 "hostsvcid": "60000", 00:22:31.702 "prchk_reftag": false, 00:22:31.702 "prchk_guard": false, 00:22:31.702 "hdgst": false, 00:22:31.702 "ddgst": false, 00:22:31.702 "multipath": "failover", 00:22:31.702 "method": "bdev_nvme_attach_controller", 00:22:31.702 "req_id": 1 00:22:31.702 } 00:22:31.702 Got JSON-RPC error response 00:22:31.702 response: 00:22:31.702 { 00:22:31.702 "code": -114, 00:22:31.702 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:31.702 
} 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.702 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.702 00:22:31.960 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.960 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:31.960 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.960 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.960 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.960 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:31.960 19:13:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.960 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.960 00:22:31.960 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.960 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:31.960 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:31.960 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.960 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.960 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.960 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:31.960 19:13:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:33.335 0 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 954190 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 
954190 ']' 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 954190 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 954190 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 954190' 00:22:33.335 killing process with pid 954190 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 954190 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 954190 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:22:33.335 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:22:33.335 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:33.335 [2024-07-25 19:13:23.522469] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:33.335 [2024-07-25 19:13:23.522559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954190 ] 00:22:33.335 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.335 [2024-07-25 19:13:23.594575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.335 [2024-07-25 19:13:23.706315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.335 [2024-07-25 19:13:24.285231] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name d68be691-eaa1-4d73-9377-e4f8ea3dcc6b already exists 00:22:33.335 [2024-07-25 19:13:24.285272] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:d68be691-eaa1-4d73-9377-e4f8ea3dcc6b alias for bdev NVMe1n1 00:22:33.335 [2024-07-25 19:13:24.285287] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:33.335 Running I/O for 1 seconds... 
00:22:33.335 00:22:33.335 Latency(us) 00:22:33.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.335 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:33.335 NVMe0n1 : 1.00 19205.40 75.02 0.00 0.00 6654.15 2305.90 11699.39 00:22:33.335 =================================================================================================================== 00:22:33.335 Total : 19205.40 75.02 0.00 0.00 6654.15 2305.90 11699.39 00:22:33.335 Received shutdown signal, test time was about 1.000000 seconds 00:22:33.335 00:22:33.335 Latency(us) 00:22:33.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.335 =================================================================================================================== 00:22:33.335 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:33.335 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:33.336 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:33.336 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:33.336 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:33.336 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:33.336 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:33.336 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:33.336 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:33.336 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.336 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:33.336 
rmmod nvme_tcp 00:22:33.594 rmmod nvme_fabrics 00:22:33.594 rmmod nvme_keyring 00:22:33.594 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:33.594 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:33.594 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:33.594 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 954037 ']' 00:22:33.594 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 954037 00:22:33.594 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 954037 ']' 00:22:33.594 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 954037 00:22:33.594 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:33.594 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.594 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 954037 00:22:33.594 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:33.594 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:33.594 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 954037' 00:22:33.594 killing process with pid 954037 00:22:33.594 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 954037 00:22:33.594 19:13:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 954037 00:22:33.854 19:13:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:33.854 19:13:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:33.854 19:13:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:33.854 19:13:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:33.854 19:13:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:33.854 19:13:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.854 19:13:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.854 19:13:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:36.388 00:22:36.388 real 0m8.636s 00:22:36.388 user 0m14.139s 00:22:36.388 sys 0m2.679s 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:36.388 ************************************ 00:22:36.388 END TEST nvmf_multicontroller 00:22:36.388 ************************************ 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.388 ************************************ 00:22:36.388 START TEST nvmf_aer 00:22:36.388 ************************************ 00:22:36.388 19:13:28 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:36.388 * Looking for test storage... 00:22:36.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.388 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:36.389 19:13:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga 
x722 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:38.291 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:38.291 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:38.291 Found net devices under 0000:09:00.0: cvl_0_0 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:38.291 Found net devices under 0000:09:00.1: cvl_0_1 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.291 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.550 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.550 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.550 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:22:38.550 00:22:38.550 --- 10.0.0.2 ping statistics --- 00:22:38.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.550 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:22:38.550 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:38.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:22:38.550 00:22:38.550 --- 10.0.0.1 ping statistics --- 00:22:38.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.550 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:22:38.550 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.550 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:38.550 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=956808 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 956808 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 956808 ']' 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:38.551 19:13:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.551 [2024-07-25 19:13:30.872979] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:38.551 [2024-07-25 19:13:30.873077] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.551 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.551 [2024-07-25 19:13:30.950822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.809 [2024-07-25 19:13:31.070417] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.809 [2024-07-25 19:13:31.070474] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.809 [2024-07-25 19:13:31.070500] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.809 [2024-07-25 19:13:31.070521] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:38.809 [2024-07-25 19:13:31.070539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.809 [2024-07-25 19:13:31.070613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.809 [2024-07-25 19:13:31.070684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.809 [2024-07-25 19:13:31.070706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.809 [2024-07-25 19:13:31.070712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.374 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:39.374 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:22:39.374 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.374 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:39.374 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:39.374 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.374 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:39.374 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.374 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:39.374 [2024-07-25 19:13:31.844755] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.632 19:13:31 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:39.632 Malloc0 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:39.632 [2024-07-25 19:13:31.896275] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.632 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:39.632 [ 
00:22:39.632 { 00:22:39.632 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:39.632 "subtype": "Discovery", 00:22:39.632 "listen_addresses": [], 00:22:39.632 "allow_any_host": true, 00:22:39.632 "hosts": [] 00:22:39.632 }, 00:22:39.632 { 00:22:39.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.632 "subtype": "NVMe", 00:22:39.632 "listen_addresses": [ 00:22:39.632 { 00:22:39.632 "trtype": "TCP", 00:22:39.632 "adrfam": "IPv4", 00:22:39.632 "traddr": "10.0.0.2", 00:22:39.632 "trsvcid": "4420" 00:22:39.632 } 00:22:39.632 ], 00:22:39.632 "allow_any_host": true, 00:22:39.633 "hosts": [], 00:22:39.633 "serial_number": "SPDK00000000000001", 00:22:39.633 "model_number": "SPDK bdev Controller", 00:22:39.633 "max_namespaces": 2, 00:22:39.633 "min_cntlid": 1, 00:22:39.633 "max_cntlid": 65519, 00:22:39.633 "namespaces": [ 00:22:39.633 { 00:22:39.633 "nsid": 1, 00:22:39.633 "bdev_name": "Malloc0", 00:22:39.633 "name": "Malloc0", 00:22:39.633 "nguid": "85180267C986467887873E66FB18A0FC", 00:22:39.633 "uuid": "85180267-c986-4678-8787-3e66fb18a0fc" 00:22:39.633 } 00:22:39.633 ] 00:22:39.633 } 00:22:39.633 ] 00:22:39.633 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.633 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:39.633 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:39.633 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=956964 00:22:39.633 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:39.633 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:39.633 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:39.633 19:13:31 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:39.633 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:39.633 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:39.633 19:13:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:39.633 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.633 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:39.633 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:39.633 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:39.633 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:39.891 Malloc1 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:39.891 [ 00:22:39.891 { 00:22:39.891 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:39.891 "subtype": "Discovery", 00:22:39.891 "listen_addresses": [], 00:22:39.891 "allow_any_host": true, 00:22:39.891 "hosts": [] 00:22:39.891 }, 00:22:39.891 { 00:22:39.891 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.891 "subtype": "NVMe", 00:22:39.891 "listen_addresses": [ 00:22:39.891 { 00:22:39.891 "trtype": "TCP", 00:22:39.891 "adrfam": "IPv4", 00:22:39.891 "traddr": "10.0.0.2", 00:22:39.891 "trsvcid": "4420" 00:22:39.891 } 00:22:39.891 ], 00:22:39.891 "allow_any_host": true, 00:22:39.891 "hosts": [], 00:22:39.891 "serial_number": "SPDK00000000000001", 00:22:39.891 "model_number": 
"SPDK bdev Controller", 00:22:39.891 "max_namespaces": 2, 00:22:39.891 "min_cntlid": 1, 00:22:39.891 "max_cntlid": 65519, 00:22:39.891 "namespaces": [ 00:22:39.891 { 00:22:39.891 "nsid": 1, 00:22:39.891 "bdev_name": "Malloc0", 00:22:39.891 "name": "Malloc0", 00:22:39.891 "nguid": "85180267C986467887873E66FB18A0FC", 00:22:39.891 "uuid": "85180267-c986-4678-8787-3e66fb18a0fc" 00:22:39.891 }, 00:22:39.891 { 00:22:39.891 "nsid": 2, 00:22:39.891 "bdev_name": "Malloc1", 00:22:39.891 "name": "Malloc1", 00:22:39.891 "nguid": "DFB42F61FA7947F69A48E81F836B0C0A", 00:22:39.891 "uuid": "dfb42f61-fa79-47f6-9a48-e81f836b0c0a" 00:22:39.891 } 00:22:39.891 ] 00:22:39.891 } 00:22:39.891 ] 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 956964 00:22:39.891 Asynchronous Event Request test 00:22:39.891 Attaching to 10.0.0.2 00:22:39.891 Attached to 10.0.0.2 00:22:39.891 Registering asynchronous event callbacks... 00:22:39.891 Starting namespace attribute notice tests for all controllers... 00:22:39.891 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:39.891 aer_cb - Changed Namespace 00:22:39.891 Cleaning up... 
00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:39.891 rmmod nvme_tcp 
00:22:39.891 rmmod nvme_fabrics 00:22:39.891 rmmod nvme_keyring 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 956808 ']' 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 956808 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 956808 ']' 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 956808 00:22:39.891 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:22:39.892 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:39.892 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 956808 00:22:39.892 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:39.892 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:39.892 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 956808' 00:22:39.892 killing process with pid 956808 00:22:39.892 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 956808 00:22:39.892 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 956808 00:22:40.459 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:40.459 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:40.459 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:40.459 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:40.459 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:40.459 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.459 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.459 19:13:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:42.360 00:22:42.360 real 0m6.358s 00:22:42.360 user 0m7.062s 00:22:42.360 sys 0m2.105s 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:42.360 ************************************ 00:22:42.360 END TEST nvmf_aer 00:22:42.360 ************************************ 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.360 ************************************ 00:22:42.360 START TEST nvmf_async_init 00:22:42.360 ************************************ 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:42.360 * Looking for test storage... 
00:22:42.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:42.360 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.361 19:13:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:42.361 19:13:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a7a78a1a4f69419c9fbca933017c5402 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:42.361 19:13:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.892 
19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:44.892 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.892 19:13:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:44.892 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.892 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:44.893 Found net devices under 0000:09:00.0: cvl_0_0 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:44.893 Found net devices under 0000:09:00.1: cvl_0_1 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:44.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:22:44.893 00:22:44.893 --- 10.0.0.2 ping statistics --- 00:22:44.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.893 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:44.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:22:44.893 00:22:44.893 --- 10.0.0.1 ping statistics --- 00:22:44.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.893 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:44.893 19:13:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=959313 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 959313 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 959313 ']' 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:44.893 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.151 [2024-07-25 19:13:37.394312] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:45.151 [2024-07-25 19:13:37.394382] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.151 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.151 [2024-07-25 19:13:37.465508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.151 [2024-07-25 19:13:37.570729] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.151 [2024-07-25 19:13:37.570784] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.151 [2024-07-25 19:13:37.570804] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.151 [2024-07-25 19:13:37.570821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.151 [2024-07-25 19:13:37.570836] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:45.151 [2024-07-25 19:13:37.570892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.409 [2024-07-25 19:13:37.708522] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.409 null0 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a7a78a1a4f69419c9fbca933017c5402 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.409 [2024-07-25 19:13:37.748774] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.409 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.667 nvme0n1 00:22:45.667 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.667 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:45.667 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.667 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.667 [ 00:22:45.667 { 00:22:45.667 "name": "nvme0n1", 00:22:45.667 "aliases": [ 00:22:45.667 "a7a78a1a-4f69-419c-9fbc-a933017c5402" 00:22:45.667 ], 00:22:45.667 "product_name": "NVMe disk", 00:22:45.667 "block_size": 512, 00:22:45.667 "num_blocks": 2097152, 00:22:45.667 "uuid": "a7a78a1a-4f69-419c-9fbc-a933017c5402", 00:22:45.667 "assigned_rate_limits": { 00:22:45.667 "rw_ios_per_sec": 0, 00:22:45.667 "rw_mbytes_per_sec": 0, 00:22:45.667 "r_mbytes_per_sec": 0, 00:22:45.667 "w_mbytes_per_sec": 0 00:22:45.667 }, 00:22:45.667 "claimed": false, 00:22:45.667 "zoned": false, 00:22:45.667 "supported_io_types": { 00:22:45.667 "read": true, 00:22:45.667 "write": true, 00:22:45.667 "unmap": false, 00:22:45.667 "flush": true, 00:22:45.667 "reset": true, 00:22:45.667 "nvme_admin": true, 00:22:45.667 "nvme_io": true, 00:22:45.667 "nvme_io_md": false, 00:22:45.667 "write_zeroes": true, 00:22:45.667 "zcopy": false, 00:22:45.667 "get_zone_info": false, 00:22:45.667 "zone_management": false, 00:22:45.667 "zone_append": false, 00:22:45.667 "compare": true, 00:22:45.667 "compare_and_write": true, 00:22:45.667 "abort": true, 00:22:45.667 "seek_hole": false, 00:22:45.667 "seek_data": false, 00:22:45.667 "copy": true, 00:22:45.667 "nvme_iov_md": false 
00:22:45.667 }, 00:22:45.667 "memory_domains": [ 00:22:45.667 { 00:22:45.667 "dma_device_id": "system", 00:22:45.667 "dma_device_type": 1 00:22:45.667 } 00:22:45.667 ], 00:22:45.667 "driver_specific": { 00:22:45.667 "nvme": [ 00:22:45.667 { 00:22:45.667 "trid": { 00:22:45.667 "trtype": "TCP", 00:22:45.667 "adrfam": "IPv4", 00:22:45.667 "traddr": "10.0.0.2", 00:22:45.667 "trsvcid": "4420", 00:22:45.667 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:45.667 }, 00:22:45.667 "ctrlr_data": { 00:22:45.667 "cntlid": 1, 00:22:45.667 "vendor_id": "0x8086", 00:22:45.667 "model_number": "SPDK bdev Controller", 00:22:45.667 "serial_number": "00000000000000000000", 00:22:45.667 "firmware_revision": "24.09", 00:22:45.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:45.667 "oacs": { 00:22:45.667 "security": 0, 00:22:45.667 "format": 0, 00:22:45.667 "firmware": 0, 00:22:45.667 "ns_manage": 0 00:22:45.667 }, 00:22:45.667 "multi_ctrlr": true, 00:22:45.667 "ana_reporting": false 00:22:45.667 }, 00:22:45.667 "vs": { 00:22:45.667 "nvme_version": "1.3" 00:22:45.667 }, 00:22:45.667 "ns_data": { 00:22:45.667 "id": 1, 00:22:45.667 "can_share": true 00:22:45.667 } 00:22:45.667 } 00:22:45.667 ], 00:22:45.667 "mp_policy": "active_passive" 00:22:45.667 } 00:22:45.667 } 00:22:45.667 ] 00:22:45.667 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.667 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:45.667 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.667 19:13:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.667 [2024-07-25 19:13:38.001928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:45.667 [2024-07-25 19:13:38.002030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d91d0 
(9): Bad file descriptor 00:22:45.926 [2024-07-25 19:13:38.144255] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.926 [ 00:22:45.926 { 00:22:45.926 "name": "nvme0n1", 00:22:45.926 "aliases": [ 00:22:45.926 "a7a78a1a-4f69-419c-9fbc-a933017c5402" 00:22:45.926 ], 00:22:45.926 "product_name": "NVMe disk", 00:22:45.926 "block_size": 512, 00:22:45.926 "num_blocks": 2097152, 00:22:45.926 "uuid": "a7a78a1a-4f69-419c-9fbc-a933017c5402", 00:22:45.926 "assigned_rate_limits": { 00:22:45.926 "rw_ios_per_sec": 0, 00:22:45.926 "rw_mbytes_per_sec": 0, 00:22:45.926 "r_mbytes_per_sec": 0, 00:22:45.926 "w_mbytes_per_sec": 0 00:22:45.926 }, 00:22:45.926 "claimed": false, 00:22:45.926 "zoned": false, 00:22:45.926 "supported_io_types": { 00:22:45.926 "read": true, 00:22:45.926 "write": true, 00:22:45.926 "unmap": false, 00:22:45.926 "flush": true, 00:22:45.926 "reset": true, 00:22:45.926 "nvme_admin": true, 00:22:45.926 "nvme_io": true, 00:22:45.926 "nvme_io_md": false, 00:22:45.926 "write_zeroes": true, 00:22:45.926 "zcopy": false, 00:22:45.926 "get_zone_info": false, 00:22:45.926 "zone_management": false, 00:22:45.926 "zone_append": false, 00:22:45.926 "compare": true, 00:22:45.926 "compare_and_write": true, 00:22:45.926 "abort": true, 00:22:45.926 "seek_hole": false, 00:22:45.926 "seek_data": false, 00:22:45.926 "copy": true, 00:22:45.926 "nvme_iov_md": false 00:22:45.926 }, 00:22:45.926 "memory_domains": [ 00:22:45.926 { 00:22:45.926 "dma_device_id": "system", 00:22:45.926 "dma_device_type": 1 
00:22:45.926 } 00:22:45.926 ], 00:22:45.926 "driver_specific": { 00:22:45.926 "nvme": [ 00:22:45.926 { 00:22:45.926 "trid": { 00:22:45.926 "trtype": "TCP", 00:22:45.926 "adrfam": "IPv4", 00:22:45.926 "traddr": "10.0.0.2", 00:22:45.926 "trsvcid": "4420", 00:22:45.926 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:45.926 }, 00:22:45.926 "ctrlr_data": { 00:22:45.926 "cntlid": 2, 00:22:45.926 "vendor_id": "0x8086", 00:22:45.926 "model_number": "SPDK bdev Controller", 00:22:45.926 "serial_number": "00000000000000000000", 00:22:45.926 "firmware_revision": "24.09", 00:22:45.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:45.926 "oacs": { 00:22:45.926 "security": 0, 00:22:45.926 "format": 0, 00:22:45.926 "firmware": 0, 00:22:45.926 "ns_manage": 0 00:22:45.926 }, 00:22:45.926 "multi_ctrlr": true, 00:22:45.926 "ana_reporting": false 00:22:45.926 }, 00:22:45.926 "vs": { 00:22:45.926 "nvme_version": "1.3" 00:22:45.926 }, 00:22:45.926 "ns_data": { 00:22:45.926 "id": 1, 00:22:45.926 "can_share": true 00:22:45.926 } 00:22:45.926 } 00:22:45.926 ], 00:22:45.926 "mp_policy": "active_passive" 00:22:45.926 } 00:22:45.926 } 00:22:45.926 ] 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.YvHZhk5wf4 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.YvHZhk5wf4 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.926 [2024-07-25 19:13:38.194590] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:45.926 [2024-07-25 19:13:38.194748] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.926 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YvHZhk5wf4 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.927 [2024-07-25 19:13:38.202610] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YvHZhk5wf4 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.927 [2024-07-25 19:13:38.210638] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:45.927 [2024-07-25 19:13:38.210705] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:45.927 nvme0n1 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.927 [ 00:22:45.927 { 00:22:45.927 "name": "nvme0n1", 00:22:45.927 "aliases": [ 00:22:45.927 "a7a78a1a-4f69-419c-9fbc-a933017c5402" 00:22:45.927 ], 00:22:45.927 "product_name": "NVMe disk", 00:22:45.927 "block_size": 512, 00:22:45.927 "num_blocks": 2097152, 00:22:45.927 "uuid": "a7a78a1a-4f69-419c-9fbc-a933017c5402", 00:22:45.927 "assigned_rate_limits": { 00:22:45.927 "rw_ios_per_sec": 0, 00:22:45.927 "rw_mbytes_per_sec": 0, 00:22:45.927 "r_mbytes_per_sec": 0, 00:22:45.927 "w_mbytes_per_sec": 0 00:22:45.927 }, 00:22:45.927 "claimed": false, 00:22:45.927 "zoned": false, 00:22:45.927 "supported_io_types": { 
00:22:45.927 "read": true, 00:22:45.927 "write": true, 00:22:45.927 "unmap": false, 00:22:45.927 "flush": true, 00:22:45.927 "reset": true, 00:22:45.927 "nvme_admin": true, 00:22:45.927 "nvme_io": true, 00:22:45.927 "nvme_io_md": false, 00:22:45.927 "write_zeroes": true, 00:22:45.927 "zcopy": false, 00:22:45.927 "get_zone_info": false, 00:22:45.927 "zone_management": false, 00:22:45.927 "zone_append": false, 00:22:45.927 "compare": true, 00:22:45.927 "compare_and_write": true, 00:22:45.927 "abort": true, 00:22:45.927 "seek_hole": false, 00:22:45.927 "seek_data": false, 00:22:45.927 "copy": true, 00:22:45.927 "nvme_iov_md": false 00:22:45.927 }, 00:22:45.927 "memory_domains": [ 00:22:45.927 { 00:22:45.927 "dma_device_id": "system", 00:22:45.927 "dma_device_type": 1 00:22:45.927 } 00:22:45.927 ], 00:22:45.927 "driver_specific": { 00:22:45.927 "nvme": [ 00:22:45.927 { 00:22:45.927 "trid": { 00:22:45.927 "trtype": "TCP", 00:22:45.927 "adrfam": "IPv4", 00:22:45.927 "traddr": "10.0.0.2", 00:22:45.927 "trsvcid": "4421", 00:22:45.927 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:45.927 }, 00:22:45.927 "ctrlr_data": { 00:22:45.927 "cntlid": 3, 00:22:45.927 "vendor_id": "0x8086", 00:22:45.927 "model_number": "SPDK bdev Controller", 00:22:45.927 "serial_number": "00000000000000000000", 00:22:45.927 "firmware_revision": "24.09", 00:22:45.927 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:45.927 "oacs": { 00:22:45.927 "security": 0, 00:22:45.927 "format": 0, 00:22:45.927 "firmware": 0, 00:22:45.927 "ns_manage": 0 00:22:45.927 }, 00:22:45.927 "multi_ctrlr": true, 00:22:45.927 "ana_reporting": false 00:22:45.927 }, 00:22:45.927 "vs": { 00:22:45.927 "nvme_version": "1.3" 00:22:45.927 }, 00:22:45.927 "ns_data": { 00:22:45.927 "id": 1, 00:22:45.927 "can_share": true 00:22:45.927 } 00:22:45.927 } 00:22:45.927 ], 00:22:45.927 "mp_policy": "active_passive" 00:22:45.927 } 00:22:45.927 } 00:22:45.927 ] 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.YvHZhk5wf4 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:45.927 rmmod nvme_tcp 00:22:45.927 rmmod nvme_fabrics 00:22:45.927 rmmod nvme_keyring 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 959313 ']' 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
959313 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 959313 ']' 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 959313 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.927 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 959313 00:22:46.185 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:46.185 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:46.185 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 959313' 00:22:46.185 killing process with pid 959313 00:22:46.185 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 959313 00:22:46.185 [2024-07-25 19:13:38.401531] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:46.185 [2024-07-25 19:13:38.401579] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:46.186 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 959313 00:22:46.443 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:46.443 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:46.443 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:46.443 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:22:46.443 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:46.443 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.443 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.443 19:13:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:48.387 00:22:48.387 real 0m5.994s 00:22:48.387 user 0m2.291s 00:22:48.387 sys 0m2.090s 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.387 ************************************ 00:22:48.387 END TEST nvmf_async_init 00:22:48.387 ************************************ 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.387 ************************************ 00:22:48.387 START TEST dma 00:22:48.387 ************************************ 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:48.387 * Looking for test storage... 
00:22:48.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.387 19:13:40 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:48.387 00:22:48.387 real 0m0.067s 00:22:48.387 user 0m0.025s 00:22:48.387 sys 0m0.047s 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:48.387 ************************************ 00:22:48.387 END TEST dma 00:22:48.387 ************************************ 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:48.387 19:13:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.646 ************************************ 00:22:48.646 START TEST nvmf_identify 00:22:48.646 ************************************ 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:48.646 * Looking for test storage... 
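The dma suite above exits almost immediately: host/dma.sh guards with `'[' tcp '!=' rdma ']'` and `exit 0`, i.e. the DMA test only applies to RDMA transports and is skipped (successfully) on tcp runs. A reduced sketch of that guard, with a helper name of our own:

```shell
#!/usr/bin/env bash
# Hypothetical reduction of the transport guard seen in host/dma.sh:
# the test body only applies to rdma, so any other transport is skipped
# with a success status, which is why "END TEST dma" reports ~0m elapsed.
transport_guard() {
    local transport="$1"
    if [ "$transport" != "rdma" ]; then
        echo "skipping dma test for transport=$transport"
        return 0
    fi
    echo "running dma test over rdma"
}

transport_guard tcp
```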
00:22:48.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.646 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:48.647 19:13:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.178 19:13:43 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:51.178 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:51.178 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:51.178 Found net devices under 0000:09:00.0: cvl_0_0 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:51.178 Found net devices under 0000:09:00.1: cvl_0_1 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:51.178 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:51.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:51.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:22:51.179 00:22:51.179 --- 10.0.0.2 ping statistics --- 00:22:51.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.179 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:22:51.179 00:22:51.179 --- 10.0.0.1 ping statistics --- 00:22:51.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.179 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
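The nvmf_tcp_init block above splits the two physical ports into target and initiator roles: cvl_0_0 moves into its own network namespace with 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, and the cross-namespace pings verify both directions. A dry-run sketch of that sequence (interface names copied from this log; real runs need root, and `RUN=echo` here only prints the commands):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target/initiator split performed by nvmf_tcp_init
# in the trace above. RUN=echo makes every line a no-op that just prints
# the command it would run; drop RUN (and run as root) to apply for real.
RUN=${RUN:-echo}
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

$RUN ip netns add "$ns"
$RUN ip link set "$target_if" netns "$ns"
$RUN ip addr add 10.0.0.1/24 dev "$initiator_if"
$RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
$RUN ip link set "$initiator_if" up
$RUN ip netns exec "$ns" ip link set "$target_if" up
$RUN ip netns exec "$ns" ip link set lo up
$RUN ping -c 1 10.0.0.2   # initiator -> target reachability check
```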
00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=961731 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 961731 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 961731 ']' 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:51.179 19:13:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.179 [2024-07-25 19:13:43.552902] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:51.179 [2024-07-25 19:13:43.552981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.179 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.179 [2024-07-25 19:13:43.638601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.436 [2024-07-25 19:13:43.765340] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.436 [2024-07-25 19:13:43.765395] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.436 [2024-07-25 19:13:43.765421] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.436 [2024-07-25 19:13:43.765442] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.436 [2024-07-25 19:13:43.765466] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
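After launching nvmf_tgt inside the namespace, the harness's waitforlisten blocks until the target's RPC socket (/var/tmp/spdk.sock in this run) is ready before issuing RPCs. Conceptually that is a poll-with-retry loop; a sketch under our own helper name, checking only for the socket path's existence:

```shell
#!/usr/bin/env bash
# Hedged sketch of what waitforlisten does conceptually: poll until the
# target's RPC socket path appears, giving up after a retry budget. The
# real harness also checks that the pid is alive and that the RPC server
# answers; this reduction only watches the filesystem path.
wait_for_sock() {
    local sock="$1" retries="${2:-100}"
    while [ "$retries" -gt 0 ]; do
        [ -e "$sock" ] && return 0
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1
}
```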
00:22:51.436 [2024-07-25 19:13:43.769127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.436 [2024-07-25 19:13:43.769159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.436 [2024-07-25 19:13:43.769272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.436 [2024-07-25 19:13:43.769276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.371 [2024-07-25 19:13:44.531674] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.371 Malloc0 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.371 19:13:44 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.371 [2024-07-25 19:13:44.602872] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.371 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.371 19:13:44 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:52.372 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:22:52.372 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:52.372 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:52.372 [
00:22:52.372   {
00:22:52.372     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:52.372     "subtype": "Discovery",
00:22:52.372     "listen_addresses": [
00:22:52.372       {
00:22:52.372         "trtype": "TCP",
00:22:52.372         "adrfam": "IPv4",
00:22:52.372         "traddr": "10.0.0.2",
00:22:52.372         "trsvcid": "4420"
00:22:52.372       }
00:22:52.372     ],
00:22:52.372     "allow_any_host": true,
00:22:52.372     "hosts": []
00:22:52.372   },
00:22:52.372   {
00:22:52.372     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:52.372     "subtype": "NVMe",
00:22:52.372     "listen_addresses": [
00:22:52.372       {
00:22:52.372         "trtype": "TCP",
00:22:52.372         "adrfam": "IPv4",
00:22:52.372         "traddr": "10.0.0.2",
00:22:52.372         "trsvcid": "4420"
00:22:52.372       }
00:22:52.372     ],
00:22:52.372     "allow_any_host": true,
00:22:52.372     "hosts": [],
00:22:52.372     "serial_number": "SPDK00000000000001",
00:22:52.372     "model_number": "SPDK bdev Controller",
00:22:52.372     "max_namespaces": 32,
00:22:52.372     "min_cntlid": 1,
00:22:52.372     "max_cntlid": 65519,
00:22:52.372     "namespaces": [
00:22:52.372       {
00:22:52.372         "nsid": 1,
00:22:52.372         "bdev_name": "Malloc0",
00:22:52.372         "name": "Malloc0",
00:22:52.372         "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:22:52.372         "eui64": "ABCDEF0123456789",
00:22:52.372         "uuid": "1e41178c-c38c-4602-8ba9-7f7ff3dd0fad"
00:22:52.372       }
00:22:52.372     ]
00:22:52.372   }
00:22:52.372 ]
00:22:52.372 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:52.372 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:22:52.372 [2024-07-25 19:13:44.641490] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:22:52.372 [2024-07-25 19:13:44.641529] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid961885 ]
00:22:52.372 EAL: No free 2048 kB hugepages reported on node 1
00:22:52.372 [2024-07-25 19:13:44.675436] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout)
00:22:52.372 [2024-07-25 19:13:44.675492] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:22:52.372 [2024-07-25 19:13:44.675502] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:22:52.372 [2024-07-25 19:13:44.675516] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:22:52.372 [2024-07-25 19:13:44.675529] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:22:52.372 [2024-07-25 19:13:44.675863] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout)
00:22:52.372 [2024-07-25 19:13:44.675930] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1dd0540 0
00:22:52.372 [2024-07-25 19:13:44.690127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:22:52.372 [2024-07-25 19:13:44.690152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:22:52.372 [2024-07-25 19:13:44.690162] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:22:52.372 [2024-07-25 19:13:44.690168] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:52.372 [2024-07-25 19:13:44.690216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.690228] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.690235] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540) 00:22:52.372 [2024-07-25 19:13:44.690251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:52.372 [2024-07-25 19:13:44.690277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0 00:22:52.372 [2024-07-25 19:13:44.698135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.372 [2024-07-25 19:13:44.698153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.372 [2024-07-25 19:13:44.698160] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.698167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540 00:22:52.372 [2024-07-25 19:13:44.698185] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:52.372 [2024-07-25 19:13:44.698197] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:52.372 [2024-07-25 19:13:44.698213] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:52.372 [2024-07-25 19:13:44.698234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.698243] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.698250] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x1dd0540) 00:22:52.372 [2024-07-25 19:13:44.698261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-07-25 19:13:44.698285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0 00:22:52.372 [2024-07-25 19:13:44.698494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.372 [2024-07-25 19:13:44.698506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.372 [2024-07-25 19:13:44.698513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.698520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540 00:22:52.372 [2024-07-25 19:13:44.698534] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:52.372 [2024-07-25 19:13:44.698548] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:52.372 [2024-07-25 19:13:44.698560] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.698568] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.698574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540) 00:22:52.372 [2024-07-25 19:13:44.698585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-07-25 19:13:44.698607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0 00:22:52.372 [2024-07-25 19:13:44.698743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.372 [2024-07-25 19:13:44.698755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:22:52.372 [2024-07-25 19:13:44.698762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.698769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540 00:22:52.372 [2024-07-25 19:13:44.698777] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:52.372 [2024-07-25 19:13:44.698791] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:52.372 [2024-07-25 19:13:44.698803] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.698811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.698817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540) 00:22:52.372 [2024-07-25 19:13:44.698828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-07-25 19:13:44.698849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0 00:22:52.372 [2024-07-25 19:13:44.698985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.372 [2024-07-25 19:13:44.699001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.372 [2024-07-25 19:13:44.699008] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.699014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540 00:22:52.372 [2024-07-25 19:13:44.699023] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:52.372 [2024-07-25 19:13:44.699040] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.699054] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.699061] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540) 00:22:52.372 [2024-07-25 19:13:44.699072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-07-25 19:13:44.699098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0 00:22:52.372 [2024-07-25 19:13:44.699245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.372 [2024-07-25 19:13:44.699261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.372 [2024-07-25 19:13:44.699267] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.699274] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540 00:22:52.372 [2024-07-25 19:13:44.699283] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:52.372 [2024-07-25 19:13:44.699291] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:52.372 [2024-07-25 19:13:44.699305] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:52.372 [2024-07-25 19:13:44.699415] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:52.372 [2024-07-25 19:13:44.699423] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:52.372 [2024-07-25 19:13:44.699436] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.699444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.372 [2024-07-25 19:13:44.699451] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540) 00:22:52.373 [2024-07-25 19:13:44.699477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.373 [2024-07-25 19:13:44.699499] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0 00:22:52.373 [2024-07-25 19:13:44.699684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.373 [2024-07-25 19:13:44.699697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.373 [2024-07-25 19:13:44.699703] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.699710] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540 00:22:52.373 [2024-07-25 19:13:44.699719] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:52.373 [2024-07-25 19:13:44.699735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.699744] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.699750] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540) 00:22:52.373 [2024-07-25 19:13:44.699761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.373 [2024-07-25 19:13:44.699782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0 00:22:52.373 [2024-07-25 
19:13:44.699938] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.373 [2024-07-25 19:13:44.699954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.373 [2024-07-25 19:13:44.699960] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.699967] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540 00:22:52.373 [2024-07-25 19:13:44.699975] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:52.373 [2024-07-25 19:13:44.699988] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:52.373 [2024-07-25 19:13:44.700002] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:52.373 [2024-07-25 19:13:44.700020] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:52.373 [2024-07-25 19:13:44.700036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700044] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540) 00:22:52.373 [2024-07-25 19:13:44.700055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.373 [2024-07-25 19:13:44.700077] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0 00:22:52.373 [2024-07-25 19:13:44.700263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.373 [2024-07-25 19:13:44.700279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:22:52.373 [2024-07-25 19:13:44.700286] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700293] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd0540): datao=0, datal=4096, cccid=0 00:22:52.373 [2024-07-25 19:13:44.700300] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e303c0) on tqpair(0x1dd0540): expected_datao=0, payload_size=4096 00:22:52.373 [2024-07-25 19:13:44.700308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700336] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700346] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700480] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.373 [2024-07-25 19:13:44.700495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.373 [2024-07-25 19:13:44.700502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700508] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540 00:22:52.373 [2024-07-25 19:13:44.700520] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:52.373 [2024-07-25 19:13:44.700529] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:52.373 [2024-07-25 19:13:44.700537] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:52.373 [2024-07-25 19:13:44.700545] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:52.373 [2024-07-25 19:13:44.700554] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:22:52.373 [2024-07-25 19:13:44.700562] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:52.373 [2024-07-25 19:13:44.700576] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:52.373 [2024-07-25 19:13:44.700593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540) 00:22:52.373 [2024-07-25 19:13:44.700620] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.373 [2024-07-25 19:13:44.700656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0 00:22:52.373 [2024-07-25 19:13:44.700820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.373 [2024-07-25 19:13:44.700840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.373 [2024-07-25 19:13:44.700847] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700854] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540 00:22:52.373 [2024-07-25 19:13:44.700866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540) 00:22:52.373 [2024-07-25 19:13:44.700891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.373 [2024-07-25 19:13:44.700901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1dd0540) 00:22:52.373 [2024-07-25 19:13:44.700923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.373 [2024-07-25 19:13:44.700933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1dd0540) 00:22:52.373 [2024-07-25 19:13:44.700955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.373 [2024-07-25 19:13:44.700965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.700978] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540) 00:22:52.373 [2024-07-25 19:13:44.700987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.373 [2024-07-25 19:13:44.700996] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:52.373 [2024-07-25 19:13:44.701016] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:22:52.373 [2024-07-25 19:13:44.701029] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.701036] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd0540) 00:22:52.373 [2024-07-25 19:13:44.701061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.373 [2024-07-25 19:13:44.701083] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0 00:22:52.373 [2024-07-25 19:13:44.701094] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30540, cid 1, qid 0 00:22:52.373 [2024-07-25 19:13:44.701125] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e306c0, cid 2, qid 0 00:22:52.373 [2024-07-25 19:13:44.701135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0 00:22:52.373 [2024-07-25 19:13:44.701143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e309c0, cid 4, qid 0 00:22:52.373 [2024-07-25 19:13:44.701330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.373 [2024-07-25 19:13:44.701342] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.373 [2024-07-25 19:13:44.701348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.701355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e309c0) on tqpair=0x1dd0540 00:22:52.373 [2024-07-25 19:13:44.701364] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:52.373 [2024-07-25 19:13:44.701378] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:52.373 [2024-07-25 19:13:44.701397] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.701406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd0540) 00:22:52.373 [2024-07-25 19:13:44.701417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.373 [2024-07-25 19:13:44.701438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e309c0, cid 4, qid 0 00:22:52.373 [2024-07-25 19:13:44.701593] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.373 [2024-07-25 19:13:44.701608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.373 [2024-07-25 19:13:44.701615] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.701621] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd0540): datao=0, datal=4096, cccid=4 00:22:52.373 [2024-07-25 19:13:44.701629] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e309c0) on tqpair(0x1dd0540): expected_datao=0, payload_size=4096 00:22:52.373 [2024-07-25 19:13:44.701637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.701647] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.701655] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.373 [2024-07-25 19:13:44.701732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.373 [2024-07-25 19:13:44.701744] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.374 [2024-07-25 19:13:44.701750] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.701757] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e309c0) on tqpair=0x1dd0540 00:22:52.374 [2024-07-25 19:13:44.701775] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:52.374 [2024-07-25 19:13:44.701811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.701822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd0540) 00:22:52.374 [2024-07-25 19:13:44.701833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.374 [2024-07-25 19:13:44.701844] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.701851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.701858] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dd0540) 00:22:52.374 [2024-07-25 19:13:44.701867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.374 [2024-07-25 19:13:44.701908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e309c0, cid 4, qid 0 00:22:52.374 [2024-07-25 19:13:44.701920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30b40, cid 5, qid 0 00:22:52.374 [2024-07-25 19:13:44.706134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.374 [2024-07-25 19:13:44.706161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.374 [2024-07-25 19:13:44.706168] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.706175] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd0540): datao=0, datal=1024, cccid=4 00:22:52.374 [2024-07-25 19:13:44.706182] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e309c0) on tqpair(0x1dd0540): expected_datao=0, 
payload_size=1024 00:22:52.374 [2024-07-25 19:13:44.706190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.706200] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.706211] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.706220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.374 [2024-07-25 19:13:44.706229] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.374 [2024-07-25 19:13:44.706235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.706242] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30b40) on tqpair=0x1dd0540 00:22:52.374 [2024-07-25 19:13:44.746134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.374 [2024-07-25 19:13:44.746152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.374 [2024-07-25 19:13:44.746159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.746166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e309c0) on tqpair=0x1dd0540 00:22:52.374 [2024-07-25 19:13:44.746184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.746193] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd0540) 00:22:52.374 [2024-07-25 19:13:44.746205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.374 [2024-07-25 19:13:44.746235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e309c0, cid 4, qid 0 00:22:52.374 [2024-07-25 19:13:44.746411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.374 [2024-07-25 19:13:44.746426] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.374 [2024-07-25 19:13:44.746433] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.746440] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd0540): datao=0, datal=3072, cccid=4 00:22:52.374 [2024-07-25 19:13:44.746448] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e309c0) on tqpair(0x1dd0540): expected_datao=0, payload_size=3072 00:22:52.374 [2024-07-25 19:13:44.746455] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.746498] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.746508] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.746641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.374 [2024-07-25 19:13:44.746657] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.374 [2024-07-25 19:13:44.746663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.746670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e309c0) on tqpair=0x1dd0540 00:22:52.374 [2024-07-25 19:13:44.746685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.374 [2024-07-25 19:13:44.746694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd0540) 00:22:52.374 [2024-07-25 19:13:44.746705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.374 [2024-07-25 19:13:44.746734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e309c0, cid 4, qid 0 00:22:52.374 [2024-07-25 19:13:44.746889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.374 [2024-07-25 
19:13:44.746901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:52.374 [2024-07-25 19:13:44.746908] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:52.374 [2024-07-25 19:13:44.746914] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd0540): datao=0, datal=8, cccid=4
00:22:52.374 [2024-07-25 19:13:44.746922] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e309c0) on tqpair(0x1dd0540): expected_datao=0, payload_size=8
00:22:52.374 [2024-07-25 19:13:44.746929] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:52.374 [2024-07-25 19:13:44.746939] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:52.374 [2024-07-25 19:13:44.746947] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:52.374 [2024-07-25 19:13:44.787266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:52.374 [2024-07-25 19:13:44.787285] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:52.374 [2024-07-25 19:13:44.787292] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:52.374 [2024-07-25 19:13:44.787300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e309c0) on tqpair=0x1dd0540
00:22:52.374 =====================================================
00:22:52.374 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:52.374 =====================================================
00:22:52.374 Controller Capabilities/Features
00:22:52.374 ================================
00:22:52.374 Vendor ID: 0000
00:22:52.374 Subsystem Vendor ID: 0000
00:22:52.374 Serial Number: ....................
00:22:52.374 Model Number: ........................................
00:22:52.374 Firmware Version: 24.09
00:22:52.374 Recommended Arb Burst: 0
00:22:52.374 IEEE OUI Identifier: 00 00 00
00:22:52.374 Multi-path I/O
00:22:52.374   May have multiple subsystem ports: No
00:22:52.374   May have multiple controllers: No
00:22:52.374   Associated with SR-IOV VF: No
00:22:52.374 Max Data Transfer Size: 131072
00:22:52.374 Max Number of Namespaces: 0
00:22:52.374 Max Number of I/O Queues: 1024
00:22:52.374 NVMe Specification Version (VS): 1.3
00:22:52.374 NVMe Specification Version (Identify): 1.3
00:22:52.374 Maximum Queue Entries: 128
00:22:52.374 Contiguous Queues Required: Yes
00:22:52.374 Arbitration Mechanisms Supported
00:22:52.374   Weighted Round Robin: Not Supported
00:22:52.374   Vendor Specific: Not Supported
00:22:52.374 Reset Timeout: 15000 ms
00:22:52.374 Doorbell Stride: 4 bytes
00:22:52.374 NVM Subsystem Reset: Not Supported
00:22:52.374 Command Sets Supported
00:22:52.374   NVM Command Set: Supported
00:22:52.374   Boot Partition: Not Supported
00:22:52.374 Memory Page Size Minimum: 4096 bytes
00:22:52.374 Memory Page Size Maximum: 4096 bytes
00:22:52.374 Persistent Memory Region: Not Supported
00:22:52.374 Optional Asynchronous Events Supported
00:22:52.374   Namespace Attribute Notices: Not Supported
00:22:52.374   Firmware Activation Notices: Not Supported
00:22:52.374   ANA Change Notices: Not Supported
00:22:52.374   PLE Aggregate Log Change Notices: Not Supported
00:22:52.374   LBA Status Info Alert Notices: Not Supported
00:22:52.374   EGE Aggregate Log Change Notices: Not Supported
00:22:52.374   Normal NVM Subsystem Shutdown event: Not Supported
00:22:52.374   Zone Descriptor Change Notices: Not Supported
00:22:52.374   Discovery Log Change Notices: Supported
00:22:52.374 Controller Attributes
00:22:52.374   128-bit Host Identifier: Not Supported
00:22:52.374   Non-Operational Permissive Mode: Not Supported
00:22:52.374   NVM Sets: Not Supported
00:22:52.374   Read Recovery Levels: Not Supported
00:22:52.374   Endurance Groups: Not Supported
00:22:52.374
Predictable Latency Mode: Not Supported 00:22:52.374 Traffic Based Keep ALive: Not Supported 00:22:52.374 Namespace Granularity: Not Supported 00:22:52.374 SQ Associations: Not Supported 00:22:52.374 UUID List: Not Supported 00:22:52.374 Multi-Domain Subsystem: Not Supported 00:22:52.374 Fixed Capacity Management: Not Supported 00:22:52.374 Variable Capacity Management: Not Supported 00:22:52.374 Delete Endurance Group: Not Supported 00:22:52.374 Delete NVM Set: Not Supported 00:22:52.374 Extended LBA Formats Supported: Not Supported 00:22:52.374 Flexible Data Placement Supported: Not Supported 00:22:52.374 00:22:52.374 Controller Memory Buffer Support 00:22:52.374 ================================ 00:22:52.374 Supported: No 00:22:52.374 00:22:52.374 Persistent Memory Region Support 00:22:52.374 ================================ 00:22:52.374 Supported: No 00:22:52.374 00:22:52.374 Admin Command Set Attributes 00:22:52.374 ============================ 00:22:52.374 Security Send/Receive: Not Supported 00:22:52.374 Format NVM: Not Supported 00:22:52.374 Firmware Activate/Download: Not Supported 00:22:52.374 Namespace Management: Not Supported 00:22:52.374 Device Self-Test: Not Supported 00:22:52.375 Directives: Not Supported 00:22:52.375 NVMe-MI: Not Supported 00:22:52.375 Virtualization Management: Not Supported 00:22:52.375 Doorbell Buffer Config: Not Supported 00:22:52.375 Get LBA Status Capability: Not Supported 00:22:52.375 Command & Feature Lockdown Capability: Not Supported 00:22:52.375 Abort Command Limit: 1 00:22:52.375 Async Event Request Limit: 4 00:22:52.375 Number of Firmware Slots: N/A 00:22:52.375 Firmware Slot 1 Read-Only: N/A 00:22:52.375 Firmware Activation Without Reset: N/A 00:22:52.375 Multiple Update Detection Support: N/A 00:22:52.375 Firmware Update Granularity: No Information Provided 00:22:52.375 Per-Namespace SMART Log: No 00:22:52.375 Asymmetric Namespace Access Log Page: Not Supported 00:22:52.375 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:52.375 Command Effects Log Page: Not Supported 00:22:52.375 Get Log Page Extended Data: Supported 00:22:52.375 Telemetry Log Pages: Not Supported 00:22:52.375 Persistent Event Log Pages: Not Supported 00:22:52.375 Supported Log Pages Log Page: May Support 00:22:52.375 Commands Supported & Effects Log Page: Not Supported 00:22:52.375 Feature Identifiers & Effects Log Page:May Support 00:22:52.375 NVMe-MI Commands & Effects Log Page: May Support 00:22:52.375 Data Area 4 for Telemetry Log: Not Supported 00:22:52.375 Error Log Page Entries Supported: 128 00:22:52.375 Keep Alive: Not Supported 00:22:52.375 00:22:52.375 NVM Command Set Attributes 00:22:52.375 ========================== 00:22:52.375 Submission Queue Entry Size 00:22:52.375 Max: 1 00:22:52.375 Min: 1 00:22:52.375 Completion Queue Entry Size 00:22:52.375 Max: 1 00:22:52.375 Min: 1 00:22:52.375 Number of Namespaces: 0 00:22:52.375 Compare Command: Not Supported 00:22:52.375 Write Uncorrectable Command: Not Supported 00:22:52.375 Dataset Management Command: Not Supported 00:22:52.375 Write Zeroes Command: Not Supported 00:22:52.375 Set Features Save Field: Not Supported 00:22:52.375 Reservations: Not Supported 00:22:52.375 Timestamp: Not Supported 00:22:52.375 Copy: Not Supported 00:22:52.375 Volatile Write Cache: Not Present 00:22:52.375 Atomic Write Unit (Normal): 1 00:22:52.375 Atomic Write Unit (PFail): 1 00:22:52.375 Atomic Compare & Write Unit: 1 00:22:52.375 Fused Compare & Write: Supported 00:22:52.375 Scatter-Gather List 00:22:52.375 SGL Command Set: Supported 00:22:52.375 SGL Keyed: Supported 00:22:52.375 SGL Bit Bucket Descriptor: Not Supported 00:22:52.375 SGL Metadata Pointer: Not Supported 00:22:52.375 Oversized SGL: Not Supported 00:22:52.375 SGL Metadata Address: Not Supported 00:22:52.375 SGL Offset: Supported 00:22:52.375 Transport SGL Data Block: Not Supported 00:22:52.375 Replay Protected Memory Block: Not Supported 00:22:52.375 00:22:52.375 
Firmware Slot Information 00:22:52.375 ========================= 00:22:52.375 Active slot: 0 00:22:52.375 00:22:52.375 00:22:52.375 Error Log 00:22:52.375 ========= 00:22:52.375 00:22:52.375 Active Namespaces 00:22:52.375 ================= 00:22:52.375 Discovery Log Page 00:22:52.375 ================== 00:22:52.375 Generation Counter: 2 00:22:52.375 Number of Records: 2 00:22:52.375 Record Format: 0 00:22:52.375 00:22:52.375 Discovery Log Entry 0 00:22:52.375 ---------------------- 00:22:52.375 Transport Type: 3 (TCP) 00:22:52.375 Address Family: 1 (IPv4) 00:22:52.375 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:52.375 Entry Flags: 00:22:52.375 Duplicate Returned Information: 1 00:22:52.375 Explicit Persistent Connection Support for Discovery: 1 00:22:52.375 Transport Requirements: 00:22:52.375 Secure Channel: Not Required 00:22:52.375 Port ID: 0 (0x0000) 00:22:52.375 Controller ID: 65535 (0xffff) 00:22:52.375 Admin Max SQ Size: 128 00:22:52.375 Transport Service Identifier: 4420 00:22:52.375 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:52.375 Transport Address: 10.0.0.2 00:22:52.375 Discovery Log Entry 1 00:22:52.375 ---------------------- 00:22:52.375 Transport Type: 3 (TCP) 00:22:52.375 Address Family: 1 (IPv4) 00:22:52.375 Subsystem Type: 2 (NVM Subsystem) 00:22:52.375 Entry Flags: 00:22:52.375 Duplicate Returned Information: 0 00:22:52.375 Explicit Persistent Connection Support for Discovery: 0 00:22:52.375 Transport Requirements: 00:22:52.375 Secure Channel: Not Required 00:22:52.375 Port ID: 0 (0x0000) 00:22:52.375 Controller ID: 65535 (0xffff) 00:22:52.375 Admin Max SQ Size: 128 00:22:52.375 Transport Service Identifier: 4420 00:22:52.375 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:52.375 Transport Address: 10.0.0.2 [2024-07-25 19:13:44.787417] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:52.375 [2024-07-25 19:13:44.787439] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540 00:22:52.375 [2024-07-25 19:13:44.787450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.375 [2024-07-25 19:13:44.787460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30540) on tqpair=0x1dd0540 00:22:52.375 [2024-07-25 19:13:44.787468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.375 [2024-07-25 19:13:44.787476] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e306c0) on tqpair=0x1dd0540 00:22:52.375 [2024-07-25 19:13:44.787484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.375 [2024-07-25 19:13:44.787492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540 00:22:52.375 [2024-07-25 19:13:44.787500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.375 [2024-07-25 19:13:44.787518] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.375 [2024-07-25 19:13:44.787527] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.375 [2024-07-25 19:13:44.787534] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540) 00:22:52.375 [2024-07-25 19:13:44.787560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.375 [2024-07-25 19:13:44.787585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0 00:22:52.375 [2024-07-25 19:13:44.787772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.375 [2024-07-25 19:13:44.787785] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.375 [2024-07-25 19:13:44.787792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.375 [2024-07-25 19:13:44.787798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540 00:22:52.375 [2024-07-25 19:13:44.787810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.375 [2024-07-25 19:13:44.787818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.375 [2024-07-25 19:13:44.787825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540) 00:22:52.375 [2024-07-25 19:13:44.787835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.375 [2024-07-25 19:13:44.787862] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0 00:22:52.375 [2024-07-25 19:13:44.788073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.375 [2024-07-25 19:13:44.788088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.375 [2024-07-25 19:13:44.788095] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.375 [2024-07-25 19:13:44.788110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540 00:22:52.375 [2024-07-25 19:13:44.788121] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:52.375 [2024-07-25 19:13:44.788130] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:52.375 [2024-07-25 19:13:44.788147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.375 [2024-07-25 19:13:44.788160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.375 [2024-07-25 
19:13:44.788168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540) 00:22:52.375 [2024-07-25 19:13:44.788178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.375 [2024-07-25 19:13:44.788200] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0 00:22:52.375 [2024-07-25 19:13:44.788344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.375 [2024-07-25 19:13:44.788359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.375 [2024-07-25 19:13:44.788366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.375 [2024-07-25 19:13:44.788373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540 00:22:52.375 [2024-07-25 19:13:44.788390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.375 [2024-07-25 19:13:44.788400] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.375 [2024-07-25 19:13:44.788407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540) 00:22:52.375 [2024-07-25 19:13:44.788417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.376 [2024-07-25 19:13:44.788438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0 00:22:52.376 [2024-07-25 19:13:44.788587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.376 [2024-07-25 19:13:44.788599] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.376 [2024-07-25 19:13:44.788605] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.788612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540 
00:22:52.376 [2024-07-25 19:13:44.788628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.788637] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.788644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540) 00:22:52.376 [2024-07-25 19:13:44.788655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.376 [2024-07-25 19:13:44.788675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0 00:22:52.376 [2024-07-25 19:13:44.788840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.376 [2024-07-25 19:13:44.788855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.376 [2024-07-25 19:13:44.788862] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.788868] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540 00:22:52.376 [2024-07-25 19:13:44.788885] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.788894] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.788901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540) 00:22:52.376 [2024-07-25 19:13:44.788912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.376 [2024-07-25 19:13:44.788933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0 00:22:52.376 [2024-07-25 19:13:44.789067] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.376 [2024-07-25 19:13:44.789082] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.376 
[2024-07-25 19:13:44.789089] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.789096] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540 00:22:52.376 [2024-07-25 19:13:44.789121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.789131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.789142] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540) 00:22:52.376 [2024-07-25 19:13:44.789153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.376 [2024-07-25 19:13:44.789174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0 00:22:52.376 [2024-07-25 19:13:44.789315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.376 [2024-07-25 19:13:44.789330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.376 [2024-07-25 19:13:44.789337] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.789343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540 00:22:52.376 [2024-07-25 19:13:44.789360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.789370] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.789377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540) 00:22:52.376 [2024-07-25 19:13:44.789387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.376 [2024-07-25 19:13:44.789408] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 
0 00:22:52.376 [2024-07-25 19:13:44.789563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.376 [2024-07-25 19:13:44.789575] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.376 [2024-07-25 19:13:44.789581] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.789588] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540 00:22:52.376 [2024-07-25 19:13:44.789604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.789613] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.789620] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540) 00:22:52.376 [2024-07-25 19:13:44.789631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.376 [2024-07-25 19:13:44.789651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0 00:22:52.376 [2024-07-25 19:13:44.789786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.376 [2024-07-25 19:13:44.789797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.376 [2024-07-25 19:13:44.789804] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.789811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540 00:22:52.376 [2024-07-25 19:13:44.789827] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.789836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.789843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540) 00:22:52.376 [2024-07-25 19:13:44.789853] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.376 [2024-07-25 19:13:44.789874] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0 00:22:52.376 [2024-07-25 19:13:44.790039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.376 [2024-07-25 19:13:44.790054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.376 [2024-07-25 19:13:44.790061] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.790067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540 00:22:52.376 [2024-07-25 19:13:44.790084] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.790093] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.790100] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540) 00:22:52.376 [2024-07-25 19:13:44.794143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.376 [2024-07-25 19:13:44.794168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0 00:22:52.376 [2024-07-25 19:13:44.794330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.376 [2024-07-25 19:13:44.794345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.376 [2024-07-25 19:13:44.794352] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.376 [2024-07-25 19:13:44.794359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540 00:22:52.376 [2024-07-25 19:13:44.794373] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 
milliseconds 00:22:52.376 00:22:52.376 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:52.376 [2024-07-25 19:13:44.830230] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:52.376 [2024-07-25 19:13:44.830275] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid961888 ] 00:22:52.637 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.637 [2024-07-25 19:13:44.863922] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:52.637 [2024-07-25 19:13:44.863974] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:52.637 [2024-07-25 19:13:44.863984] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:52.637 [2024-07-25 19:13:44.863997] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:52.637 [2024-07-25 19:13:44.864009] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:52.637 [2024-07-25 19:13:44.864270] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:52.637 [2024-07-25 19:13:44.864312] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x174f540 0 00:22:52.637 [2024-07-25 19:13:44.879113] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:52.637 [2024-07-25 19:13:44.879136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:52.637 [2024-07-25 
19:13:44.879144] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:52.637 [2024-07-25 19:13:44.879150] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:52.637 [2024-07-25 19:13:44.879202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.637 [2024-07-25 19:13:44.879214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.637 [2024-07-25 19:13:44.879220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174f540) 00:22:52.637 [2024-07-25 19:13:44.879235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:52.637 [2024-07-25 19:13:44.879261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af3c0, cid 0, qid 0 00:22:52.637 [2024-07-25 19:13:44.887117] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.637 [2024-07-25 19:13:44.887134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.637 [2024-07-25 19:13:44.887141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.637 [2024-07-25 19:13:44.887148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af3c0) on tqpair=0x174f540 00:22:52.637 [2024-07-25 19:13:44.887168] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:52.637 [2024-07-25 19:13:44.887195] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:52.637 [2024-07-25 19:13:44.887205] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:52.637 [2024-07-25 19:13:44.887224] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.637 [2024-07-25 19:13:44.887232] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.637 [2024-07-25 
19:13:44.887239] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174f540) 00:22:52.637 [2024-07-25 19:13:44.887250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.637 [2024-07-25 19:13:44.887274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af3c0, cid 0, qid 0 00:22:52.637 [2024-07-25 19:13:44.887431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.637 [2024-07-25 19:13:44.887445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.637 [2024-07-25 19:13:44.887452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.637 [2024-07-25 19:13:44.887459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af3c0) on tqpair=0x174f540 00:22:52.637 [2024-07-25 19:13:44.887470] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:52.637 [2024-07-25 19:13:44.887484] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:52.637 [2024-07-25 19:13:44.887497] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.637 [2024-07-25 19:13:44.887504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.637 [2024-07-25 19:13:44.887510] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174f540) 00:22:52.637 [2024-07-25 19:13:44.887521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.637 [2024-07-25 19:13:44.887542] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af3c0, cid 0, qid 0 00:22:52.637 [2024-07-25 19:13:44.887684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.637 [2024-07-25 
19:13:44.887696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.637 [2024-07-25 19:13:44.887702] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.637 [2024-07-25 19:13:44.887709] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af3c0) on tqpair=0x174f540 00:22:52.637 [2024-07-25 19:13:44.887717] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:52.637 [2024-07-25 19:13:44.887731] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:52.637 [2024-07-25 19:13:44.887743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.637 [2024-07-25 19:13:44.887750] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.637 [2024-07-25 19:13:44.887756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174f540) 00:22:52.637 [2024-07-25 19:13:44.887767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.637 [2024-07-25 19:13:44.887787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af3c0, cid 0, qid 0 00:22:52.637 [2024-07-25 19:13:44.887931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.637 [2024-07-25 19:13:44.887943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.637 [2024-07-25 19:13:44.887950] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.637 [2024-07-25 19:13:44.887956] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af3c0) on tqpair=0x174f540 00:22:52.638 [2024-07-25 19:13:44.887964] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:52.638 [2024-07-25 
19:13:44.887984] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.887994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.888000] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174f540) 00:22:52.638 [2024-07-25 19:13:44.888011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.638 [2024-07-25 19:13:44.888031] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af3c0, cid 0, qid 0 00:22:52.638 [2024-07-25 19:13:44.888174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.638 [2024-07-25 19:13:44.888188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.638 [2024-07-25 19:13:44.888194] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.888201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af3c0) on tqpair=0x174f540 00:22:52.638 [2024-07-25 19:13:44.888208] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:52.638 [2024-07-25 19:13:44.888216] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:52.638 [2024-07-25 19:13:44.888229] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:52.638 [2024-07-25 19:13:44.888339] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:52.638 [2024-07-25 19:13:44.888346] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:52.638 [2024-07-25 
19:13:44.888358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.888365] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.888372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174f540) 00:22:52.638 [2024-07-25 19:13:44.888382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.638 [2024-07-25 19:13:44.888403] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af3c0, cid 0, qid 0 00:22:52.638 [2024-07-25 19:13:44.888554] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.638 [2024-07-25 19:13:44.888570] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.638 [2024-07-25 19:13:44.888577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.888583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af3c0) on tqpair=0x174f540 00:22:52.638 [2024-07-25 19:13:44.888591] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:52.638 [2024-07-25 19:13:44.888608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.888618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.888624] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174f540) 00:22:52.638 [2024-07-25 19:13:44.888634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.638 [2024-07-25 19:13:44.888655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af3c0, cid 0, qid 0 00:22:52.638 [2024-07-25 19:13:44.888800] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.638 [2024-07-25 19:13:44.888815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.638 [2024-07-25 19:13:44.888822] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.888829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af3c0) on tqpair=0x174f540 00:22:52.638 [2024-07-25 19:13:44.888836] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:52.638 [2024-07-25 19:13:44.888849] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:52.638 [2024-07-25 19:13:44.888863] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:52.638 [2024-07-25 19:13:44.888880] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:52.638 [2024-07-25 19:13:44.888894] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.888902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174f540) 00:22:52.638 [2024-07-25 19:13:44.888913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.638 [2024-07-25 19:13:44.888949] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af3c0, cid 0, qid 0 00:22:52.638 [2024-07-25 19:13:44.889145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.638 [2024-07-25 19:13:44.889161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.638 [2024-07-25 19:13:44.889168] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889174] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174f540): datao=0, datal=4096, cccid=0 00:22:52.638 [2024-07-25 19:13:44.889182] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17af3c0) on tqpair(0x174f540): expected_datao=0, payload_size=4096 00:22:52.638 [2024-07-25 19:13:44.889189] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889216] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889226] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.638 [2024-07-25 19:13:44.889347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.638 [2024-07-25 19:13:44.889354] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af3c0) on tqpair=0x174f540 00:22:52.638 [2024-07-25 19:13:44.889371] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:52.638 [2024-07-25 19:13:44.889379] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:52.638 [2024-07-25 19:13:44.889386] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:52.638 [2024-07-25 19:13:44.889393] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:52.638 [2024-07-25 19:13:44.889415] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:52.638 [2024-07-25 19:13:44.889422] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:52.638 [2024-07-25 19:13:44.889436] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:52.638 [2024-07-25 19:13:44.889452] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889461] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174f540) 00:22:52.638 [2024-07-25 19:13:44.889477] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.638 [2024-07-25 19:13:44.889498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af3c0, cid 0, qid 0 00:22:52.638 [2024-07-25 19:13:44.889667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.638 [2024-07-25 19:13:44.889683] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.638 [2024-07-25 19:13:44.889691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af3c0) on tqpair=0x174f540 00:22:52.638 [2024-07-25 19:13:44.889707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889714] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889721] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174f540) 00:22:52.638 [2024-07-25 19:13:44.889730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.638 [2024-07-25 19:13:44.889740] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889753] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x174f540) 00:22:52.638 [2024-07-25 19:13:44.889761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.638 [2024-07-25 19:13:44.889771] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x174f540) 00:22:52.638 [2024-07-25 19:13:44.889792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.638 [2024-07-25 19:13:44.889801] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.638 [2024-07-25 19:13:44.889828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x174f540) 00:22:52.638 [2024-07-25 19:13:44.889837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.638 [2024-07-25 19:13:44.889845] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:52.638 [2024-07-25 19:13:44.889863] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:52.638 [2024-07-25 19:13:44.889876] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:52.638 [2024-07-25 19:13:44.889882] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x174f540) 00:22:52.638 [2024-07-25 19:13:44.889892] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.638 [2024-07-25 19:13:44.889914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af3c0, cid 0, qid 0 00:22:52.638 [2024-07-25 19:13:44.889940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af540, cid 1, qid 0 00:22:52.638 [2024-07-25 19:13:44.889948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af6c0, cid 2, qid 0 00:22:52.639 [2024-07-25 19:13:44.889955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af840, cid 3, qid 0 00:22:52.639 [2024-07-25 19:13:44.889963] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af9c0, cid 4, qid 0 00:22:52.639 [2024-07-25 19:13:44.890138] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.639 [2024-07-25 19:13:44.890152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.639 [2024-07-25 19:13:44.890159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.890165] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af9c0) on tqpair=0x174f540 00:22:52.639 [2024-07-25 19:13:44.890173] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:52.639 [2024-07-25 19:13:44.890186] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:52.639 [2024-07-25 19:13:44.890204] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 
00:22:52.639 [2024-07-25 19:13:44.890216] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:52.639 [2024-07-25 19:13:44.890226] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.890233] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.890240] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x174f540) 00:22:52.639 [2024-07-25 19:13:44.890250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.639 [2024-07-25 19:13:44.890271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af9c0, cid 4, qid 0 00:22:52.639 [2024-07-25 19:13:44.890417] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.639 [2024-07-25 19:13:44.890433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.639 [2024-07-25 19:13:44.890439] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.890446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af9c0) on tqpair=0x174f540 00:22:52.639 [2024-07-25 19:13:44.890514] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:52.639 [2024-07-25 19:13:44.890533] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:52.639 [2024-07-25 19:13:44.890547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.890555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x174f540) 00:22:52.639 [2024-07-25 19:13:44.890565] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.639 [2024-07-25 19:13:44.890603] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af9c0, cid 4, qid 0 00:22:52.639 [2024-07-25 19:13:44.890780] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.639 [2024-07-25 19:13:44.890797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.639 [2024-07-25 19:13:44.890803] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.890810] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174f540): datao=0, datal=4096, cccid=4 00:22:52.639 [2024-07-25 19:13:44.890817] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17af9c0) on tqpair(0x174f540): expected_datao=0, payload_size=4096 00:22:52.639 [2024-07-25 19:13:44.890824] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.890844] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.890853] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.934116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.639 [2024-07-25 19:13:44.934135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.639 [2024-07-25 19:13:44.934142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.934149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af9c0) on tqpair=0x174f540 00:22:52.639 [2024-07-25 19:13:44.934165] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:52.639 [2024-07-25 19:13:44.934181] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify 
ns (timeout 30000 ms) 00:22:52.639 [2024-07-25 19:13:44.934200] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:52.639 [2024-07-25 19:13:44.934217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.934226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x174f540) 00:22:52.639 [2024-07-25 19:13:44.934237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.639 [2024-07-25 19:13:44.934261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af9c0, cid 4, qid 0 00:22:52.639 [2024-07-25 19:13:44.934434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.639 [2024-07-25 19:13:44.934447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.639 [2024-07-25 19:13:44.934453] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.934460] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174f540): datao=0, datal=4096, cccid=4 00:22:52.639 [2024-07-25 19:13:44.934467] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17af9c0) on tqpair(0x174f540): expected_datao=0, payload_size=4096 00:22:52.639 [2024-07-25 19:13:44.934475] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.934498] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.934508] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.975229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.639 [2024-07-25 19:13:44.975249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.639 [2024-07-25 
19:13:44.975256] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.975263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af9c0) on tqpair=0x174f540 00:22:52.639 [2024-07-25 19:13:44.975286] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:52.639 [2024-07-25 19:13:44.975305] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:52.639 [2024-07-25 19:13:44.975320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.975328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x174f540) 00:22:52.639 [2024-07-25 19:13:44.975339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.639 [2024-07-25 19:13:44.975362] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af9c0, cid 4, qid 0 00:22:52.639 [2024-07-25 19:13:44.979114] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.639 [2024-07-25 19:13:44.979131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.639 [2024-07-25 19:13:44.979138] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.979144] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174f540): datao=0, datal=4096, cccid=4 00:22:52.639 [2024-07-25 19:13:44.979152] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17af9c0) on tqpair(0x174f540): expected_datao=0, payload_size=4096 00:22:52.639 [2024-07-25 19:13:44.979159] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.979169] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.979177] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.979185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.639 [2024-07-25 19:13:44.979193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.639 [2024-07-25 19:13:44.979200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.979206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af9c0) on tqpair=0x174f540 00:22:52.639 [2024-07-25 19:13:44.979219] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:52.639 [2024-07-25 19:13:44.979239] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:52.639 [2024-07-25 19:13:44.979270] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:52.639 [2024-07-25 19:13:44.979283] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:52.639 [2024-07-25 19:13:44.979292] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:52.639 [2024-07-25 19:13:44.979300] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:52.639 [2024-07-25 19:13:44.979308] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:52.639 [2024-07-25 19:13:44.979316] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:52.639 [2024-07-25 19:13:44.979324] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:52.639 [2024-07-25 19:13:44.979344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.979352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x174f540) 00:22:52.639 [2024-07-25 19:13:44.979364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.639 [2024-07-25 19:13:44.979375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.639 [2024-07-25 19:13:44.979382] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.979403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x174f540) 00:22:52.640 [2024-07-25 19:13:44.979412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.640 [2024-07-25 19:13:44.979438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af9c0, cid 4, qid 0 00:22:52.640 [2024-07-25 19:13:44.979450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17afb40, cid 5, qid 0 00:22:52.640 [2024-07-25 19:13:44.979628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.640 [2024-07-25 19:13:44.979641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.640 [2024-07-25 19:13:44.979647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.979654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af9c0) on tqpair=0x174f540 00:22:52.640 [2024-07-25 19:13:44.979664] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:22:52.640 [2024-07-25 19:13:44.979673] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.640 [2024-07-25 19:13:44.979680] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.979686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17afb40) on tqpair=0x174f540 00:22:52.640 [2024-07-25 19:13:44.979701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.979710] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x174f540) 00:22:52.640 [2024-07-25 19:13:44.979721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.640 [2024-07-25 19:13:44.979742] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17afb40, cid 5, qid 0 00:22:52.640 [2024-07-25 19:13:44.979893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.640 [2024-07-25 19:13:44.979905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.640 [2024-07-25 19:13:44.979915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.979922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17afb40) on tqpair=0x174f540 00:22:52.640 [2024-07-25 19:13:44.979938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.979947] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x174f540) 00:22:52.640 [2024-07-25 19:13:44.979957] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.640 [2024-07-25 19:13:44.979977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17afb40, cid 5, qid 0 00:22:52.640 
[2024-07-25 19:13:44.980139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.640 [2024-07-25 19:13:44.980155] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.640 [2024-07-25 19:13:44.980162] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.980168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17afb40) on tqpair=0x174f540 00:22:52.640 [2024-07-25 19:13:44.980185] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.980194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x174f540) 00:22:52.640 [2024-07-25 19:13:44.980204] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.640 [2024-07-25 19:13:44.980225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17afb40, cid 5, qid 0 00:22:52.640 [2024-07-25 19:13:44.980377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.640 [2024-07-25 19:13:44.980389] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.640 [2024-07-25 19:13:44.980395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.980402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17afb40) on tqpair=0x174f540 00:22:52.640 [2024-07-25 19:13:44.980426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.980437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x174f540) 00:22:52.640 [2024-07-25 19:13:44.980448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.640 [2024-07-25 19:13:44.980460] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.980467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x174f540) 00:22:52.640 [2024-07-25 19:13:44.980477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.640 [2024-07-25 19:13:44.980488] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.980495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x174f540) 00:22:52.640 [2024-07-25 19:13:44.980519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.640 [2024-07-25 19:13:44.980531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.980538] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x174f540) 00:22:52.640 [2024-07-25 19:13:44.980547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.640 [2024-07-25 19:13:44.980568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17afb40, cid 5, qid 0 00:22:52.640 [2024-07-25 19:13:44.980594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af9c0, cid 4, qid 0 00:22:52.640 [2024-07-25 19:13:44.980602] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17afcc0, cid 6, qid 0 00:22:52.640 [2024-07-25 19:13:44.980613] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17afe40, cid 7, qid 0 00:22:52.640 [2024-07-25 19:13:44.980845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.640 [2024-07-25 
19:13:44.980861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.640 [2024-07-25 19:13:44.980868] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.980874] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174f540): datao=0, datal=8192, cccid=5 00:22:52.640 [2024-07-25 19:13:44.980881] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17afb40) on tqpair(0x174f540): expected_datao=0, payload_size=8192 00:22:52.640 [2024-07-25 19:13:44.980889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.980985] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.980996] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.640 [2024-07-25 19:13:44.981013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.640 [2024-07-25 19:13:44.981019] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981026] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174f540): datao=0, datal=512, cccid=4 00:22:52.640 [2024-07-25 19:13:44.981033] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17af9c0) on tqpair(0x174f540): expected_datao=0, payload_size=512 00:22:52.640 [2024-07-25 19:13:44.981040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981050] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981056] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.640 [2024-07-25 19:13:44.981073] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =7 00:22:52.640 [2024-07-25 19:13:44.981080] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981086] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174f540): datao=0, datal=512, cccid=6 00:22:52.640 [2024-07-25 19:13:44.981093] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17afcc0) on tqpair(0x174f540): expected_datao=0, payload_size=512 00:22:52.640 [2024-07-25 19:13:44.981109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981121] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981128] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981136] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.640 [2024-07-25 19:13:44.981145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.640 [2024-07-25 19:13:44.981151] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981158] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174f540): datao=0, datal=4096, cccid=7 00:22:52.640 [2024-07-25 19:13:44.981165] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17afe40) on tqpair(0x174f540): expected_datao=0, payload_size=4096 00:22:52.640 [2024-07-25 19:13:44.981172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981182] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981189] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.640 [2024-07-25 19:13:44.981210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.640 [2024-07-25 19:13:44.981216] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981223] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17afb40) on tqpair=0x174f540 00:22:52.640 [2024-07-25 19:13:44.981241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.640 [2024-07-25 19:13:44.981252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.640 [2024-07-25 19:13:44.981262] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981269] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af9c0) on tqpair=0x174f540 00:22:52.640 [2024-07-25 19:13:44.981283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.640 [2024-07-25 19:13:44.981293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.640 [2024-07-25 19:13:44.981300] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17afcc0) on tqpair=0x174f540 00:22:52.640 [2024-07-25 19:13:44.981316] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.640 [2024-07-25 19:13:44.981326] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.640 [2024-07-25 19:13:44.981332] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.640 [2024-07-25 19:13:44.981338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17afe40) on tqpair=0x174f540 00:22:52.640 ===================================================== 00:22:52.641 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:52.641 ===================================================== 00:22:52.641 Controller Capabilities/Features 00:22:52.641 ================================ 00:22:52.641 Vendor ID: 8086 00:22:52.641 Subsystem Vendor ID: 8086 00:22:52.641 Serial Number: 
SPDK00000000000001 00:22:52.641 Model Number: SPDK bdev Controller 00:22:52.641 Firmware Version: 24.09 00:22:52.641 Recommended Arb Burst: 6 00:22:52.641 IEEE OUI Identifier: e4 d2 5c 00:22:52.641 Multi-path I/O 00:22:52.641 May have multiple subsystem ports: Yes 00:22:52.641 May have multiple controllers: Yes 00:22:52.641 Associated with SR-IOV VF: No 00:22:52.641 Max Data Transfer Size: 131072 00:22:52.641 Max Number of Namespaces: 32 00:22:52.641 Max Number of I/O Queues: 127 00:22:52.641 NVMe Specification Version (VS): 1.3 00:22:52.641 NVMe Specification Version (Identify): 1.3 00:22:52.641 Maximum Queue Entries: 128 00:22:52.641 Contiguous Queues Required: Yes 00:22:52.641 Arbitration Mechanisms Supported 00:22:52.641 Weighted Round Robin: Not Supported 00:22:52.641 Vendor Specific: Not Supported 00:22:52.641 Reset Timeout: 15000 ms 00:22:52.641 Doorbell Stride: 4 bytes 00:22:52.641 NVM Subsystem Reset: Not Supported 00:22:52.641 Command Sets Supported 00:22:52.641 NVM Command Set: Supported 00:22:52.641 Boot Partition: Not Supported 00:22:52.641 Memory Page Size Minimum: 4096 bytes 00:22:52.641 Memory Page Size Maximum: 4096 bytes 00:22:52.641 Persistent Memory Region: Not Supported 00:22:52.641 Optional Asynchronous Events Supported 00:22:52.641 Namespace Attribute Notices: Supported 00:22:52.641 Firmware Activation Notices: Not Supported 00:22:52.641 ANA Change Notices: Not Supported 00:22:52.641 PLE Aggregate Log Change Notices: Not Supported 00:22:52.641 LBA Status Info Alert Notices: Not Supported 00:22:52.641 EGE Aggregate Log Change Notices: Not Supported 00:22:52.641 Normal NVM Subsystem Shutdown event: Not Supported 00:22:52.641 Zone Descriptor Change Notices: Not Supported 00:22:52.641 Discovery Log Change Notices: Not Supported 00:22:52.641 Controller Attributes 00:22:52.641 128-bit Host Identifier: Supported 00:22:52.641 Non-Operational Permissive Mode: Not Supported 00:22:52.641 NVM Sets: Not Supported 00:22:52.641 Read Recovery Levels: Not 
Supported 00:22:52.641 Endurance Groups: Not Supported 00:22:52.641 Predictable Latency Mode: Not Supported 00:22:52.641 Traffic Based Keep ALive: Not Supported 00:22:52.641 Namespace Granularity: Not Supported 00:22:52.641 SQ Associations: Not Supported 00:22:52.641 UUID List: Not Supported 00:22:52.641 Multi-Domain Subsystem: Not Supported 00:22:52.641 Fixed Capacity Management: Not Supported 00:22:52.641 Variable Capacity Management: Not Supported 00:22:52.641 Delete Endurance Group: Not Supported 00:22:52.641 Delete NVM Set: Not Supported 00:22:52.641 Extended LBA Formats Supported: Not Supported 00:22:52.641 Flexible Data Placement Supported: Not Supported 00:22:52.641 00:22:52.641 Controller Memory Buffer Support 00:22:52.641 ================================ 00:22:52.641 Supported: No 00:22:52.641 00:22:52.641 Persistent Memory Region Support 00:22:52.641 ================================ 00:22:52.641 Supported: No 00:22:52.641 00:22:52.641 Admin Command Set Attributes 00:22:52.641 ============================ 00:22:52.641 Security Send/Receive: Not Supported 00:22:52.641 Format NVM: Not Supported 00:22:52.641 Firmware Activate/Download: Not Supported 00:22:52.641 Namespace Management: Not Supported 00:22:52.641 Device Self-Test: Not Supported 00:22:52.641 Directives: Not Supported 00:22:52.641 NVMe-MI: Not Supported 00:22:52.641 Virtualization Management: Not Supported 00:22:52.641 Doorbell Buffer Config: Not Supported 00:22:52.641 Get LBA Status Capability: Not Supported 00:22:52.641 Command & Feature Lockdown Capability: Not Supported 00:22:52.641 Abort Command Limit: 4 00:22:52.641 Async Event Request Limit: 4 00:22:52.641 Number of Firmware Slots: N/A 00:22:52.641 Firmware Slot 1 Read-Only: N/A 00:22:52.641 Firmware Activation Without Reset: N/A 00:22:52.641 Multiple Update Detection Support: N/A 00:22:52.641 Firmware Update Granularity: No Information Provided 00:22:52.641 Per-Namespace SMART Log: No 00:22:52.641 Asymmetric Namespace Access Log Page: Not 
Supported 00:22:52.641 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:52.641 Command Effects Log Page: Supported 00:22:52.641 Get Log Page Extended Data: Supported 00:22:52.641 Telemetry Log Pages: Not Supported 00:22:52.641 Persistent Event Log Pages: Not Supported 00:22:52.641 Supported Log Pages Log Page: May Support 00:22:52.641 Commands Supported & Effects Log Page: Not Supported 00:22:52.641 Feature Identifiers & Effects Log Page:May Support 00:22:52.641 NVMe-MI Commands & Effects Log Page: May Support 00:22:52.641 Data Area 4 for Telemetry Log: Not Supported 00:22:52.641 Error Log Page Entries Supported: 128 00:22:52.641 Keep Alive: Supported 00:22:52.641 Keep Alive Granularity: 10000 ms 00:22:52.641 00:22:52.641 NVM Command Set Attributes 00:22:52.641 ========================== 00:22:52.641 Submission Queue Entry Size 00:22:52.641 Max: 64 00:22:52.641 Min: 64 00:22:52.641 Completion Queue Entry Size 00:22:52.641 Max: 16 00:22:52.641 Min: 16 00:22:52.641 Number of Namespaces: 32 00:22:52.641 Compare Command: Supported 00:22:52.641 Write Uncorrectable Command: Not Supported 00:22:52.641 Dataset Management Command: Supported 00:22:52.641 Write Zeroes Command: Supported 00:22:52.641 Set Features Save Field: Not Supported 00:22:52.641 Reservations: Supported 00:22:52.641 Timestamp: Not Supported 00:22:52.641 Copy: Supported 00:22:52.641 Volatile Write Cache: Present 00:22:52.641 Atomic Write Unit (Normal): 1 00:22:52.641 Atomic Write Unit (PFail): 1 00:22:52.641 Atomic Compare & Write Unit: 1 00:22:52.641 Fused Compare & Write: Supported 00:22:52.641 Scatter-Gather List 00:22:52.641 SGL Command Set: Supported 00:22:52.641 SGL Keyed: Supported 00:22:52.641 SGL Bit Bucket Descriptor: Not Supported 00:22:52.641 SGL Metadata Pointer: Not Supported 00:22:52.641 Oversized SGL: Not Supported 00:22:52.641 SGL Metadata Address: Not Supported 00:22:52.641 SGL Offset: Supported 00:22:52.641 Transport SGL Data Block: Not Supported 00:22:52.641 Replay Protected Memory 
Block: Not Supported 00:22:52.641 00:22:52.641 Firmware Slot Information 00:22:52.641 ========================= 00:22:52.641 Active slot: 1 00:22:52.641 Slot 1 Firmware Revision: 24.09 00:22:52.641 00:22:52.641 00:22:52.641 Commands Supported and Effects 00:22:52.641 ============================== 00:22:52.641 Admin Commands 00:22:52.641 -------------- 00:22:52.641 Get Log Page (02h): Supported 00:22:52.641 Identify (06h): Supported 00:22:52.641 Abort (08h): Supported 00:22:52.641 Set Features (09h): Supported 00:22:52.641 Get Features (0Ah): Supported 00:22:52.641 Asynchronous Event Request (0Ch): Supported 00:22:52.641 Keep Alive (18h): Supported 00:22:52.641 I/O Commands 00:22:52.641 ------------ 00:22:52.641 Flush (00h): Supported LBA-Change 00:22:52.641 Write (01h): Supported LBA-Change 00:22:52.641 Read (02h): Supported 00:22:52.641 Compare (05h): Supported 00:22:52.641 Write Zeroes (08h): Supported LBA-Change 00:22:52.641 Dataset Management (09h): Supported LBA-Change 00:22:52.641 Copy (19h): Supported LBA-Change 00:22:52.641 00:22:52.641 Error Log 00:22:52.641 ========= 00:22:52.641 00:22:52.641 Arbitration 00:22:52.641 =========== 00:22:52.641 Arbitration Burst: 1 00:22:52.641 00:22:52.641 Power Management 00:22:52.641 ================ 00:22:52.641 Number of Power States: 1 00:22:52.641 Current Power State: Power State #0 00:22:52.641 Power State #0: 00:22:52.641 Max Power: 0.00 W 00:22:52.641 Non-Operational State: Operational 00:22:52.641 Entry Latency: Not Reported 00:22:52.641 Exit Latency: Not Reported 00:22:52.641 Relative Read Throughput: 0 00:22:52.641 Relative Read Latency: 0 00:22:52.641 Relative Write Throughput: 0 00:22:52.641 Relative Write Latency: 0 00:22:52.642 Idle Power: Not Reported 00:22:52.642 Active Power: Not Reported 00:22:52.642 Non-Operational Permissive Mode: Not Supported 00:22:52.642 00:22:52.642 Health Information 00:22:52.642 ================== 00:22:52.642 Critical Warnings: 00:22:52.642 Available Spare Space: OK 
00:22:52.642 Temperature: OK 00:22:52.642 Device Reliability: OK 00:22:52.642 Read Only: No 00:22:52.642 Volatile Memory Backup: OK 00:22:52.642 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:52.642 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:52.642 Available Spare: 0% 00:22:52.642 Available Spare Threshold: 0% 00:22:52.642 Life Percentage Used:[2024-07-25 19:13:44.981467] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.981479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x174f540) 00:22:52.642 [2024-07-25 19:13:44.981489] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.642 [2024-07-25 19:13:44.981511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17afe40, cid 7, qid 0 00:22:52.642 [2024-07-25 19:13:44.981688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.642 [2024-07-25 19:13:44.981704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.642 [2024-07-25 19:13:44.981710] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.981717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17afe40) on tqpair=0x174f540 00:22:52.642 [2024-07-25 19:13:44.981763] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:52.642 [2024-07-25 19:13:44.981782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af3c0) on tqpair=0x174f540 00:22:52.642 [2024-07-25 19:13:44.981793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.642 [2024-07-25 19:13:44.981801] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af540) on tqpair=0x174f540 00:22:52.642 [2024-07-25 
19:13:44.981809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.642 [2024-07-25 19:13:44.981832] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af6c0) on tqpair=0x174f540 00:22:52.642 [2024-07-25 19:13:44.981839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.642 [2024-07-25 19:13:44.981847] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af840) on tqpair=0x174f540 00:22:52.642 [2024-07-25 19:13:44.981854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.642 [2024-07-25 19:13:44.981866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.981874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.981880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x174f540) 00:22:52.642 [2024-07-25 19:13:44.981890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.642 [2024-07-25 19:13:44.981911] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af840, cid 3, qid 0 00:22:52.642 [2024-07-25 19:13:44.982084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.642 [2024-07-25 19:13:44.982097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.642 [2024-07-25 19:13:44.982115] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.982123] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af840) on tqpair=0x174f540 00:22:52.642 [2024-07-25 19:13:44.982134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.642 
[2024-07-25 19:13:44.982142] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.982148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x174f540) 00:22:52.642 [2024-07-25 19:13:44.982159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.642 [2024-07-25 19:13:44.982185] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af840, cid 3, qid 0 00:22:52.642 [2024-07-25 19:13:44.982345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.642 [2024-07-25 19:13:44.982357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.642 [2024-07-25 19:13:44.982364] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.982370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af840) on tqpair=0x174f540 00:22:52.642 [2024-07-25 19:13:44.982378] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:52.642 [2024-07-25 19:13:44.982385] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:52.642 [2024-07-25 19:13:44.982400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.982409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.982416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x174f540) 00:22:52.642 [2024-07-25 19:13:44.982426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.642 [2024-07-25 19:13:44.982446] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af840, cid 3, qid 0 00:22:52.642 [2024-07-25 19:13:44.982620] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.642 [2024-07-25 19:13:44.982632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.642 [2024-07-25 19:13:44.982639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.982646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af840) on tqpair=0x174f540 00:22:52.642 [2024-07-25 19:13:44.982661] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.982670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.982677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x174f540) 00:22:52.642 [2024-07-25 19:13:44.982687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.642 [2024-07-25 19:13:44.982707] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af840, cid 3, qid 0 00:22:52.642 [2024-07-25 19:13:44.982859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.642 [2024-07-25 19:13:44.982871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.642 [2024-07-25 19:13:44.982877] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.982884] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af840) on tqpair=0x174f540 00:22:52.642 [2024-07-25 19:13:44.982899] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.982908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.982915] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x174f540) 00:22:52.642 [2024-07-25 19:13:44.982925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.642 [2024-07-25 19:13:44.982945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af840, cid 3, qid 0 00:22:52.642 [2024-07-25 19:13:44.983086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.642 [2024-07-25 19:13:44.987109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.642 [2024-07-25 19:13:44.987122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.642 [2024-07-25 19:13:44.987129] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af840) on tqpair=0x174f540 00:22:52.643 [2024-07-25 19:13:44.987162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.643 [2024-07-25 19:13:44.987172] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.643 [2024-07-25 19:13:44.987178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x174f540) 00:22:52.643 [2024-07-25 19:13:44.987189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.643 [2024-07-25 19:13:44.987211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17af840, cid 3, qid 0 00:22:52.643 [2024-07-25 19:13:44.987364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.643 [2024-07-25 19:13:44.987380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.643 [2024-07-25 19:13:44.987386] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.643 [2024-07-25 19:13:44.987393] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17af840) on tqpair=0x174f540 00:22:52.643 [2024-07-25 19:13:44.987406] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:22:52.643 0% 00:22:52.643 Data Units Read: 0 00:22:52.643 Data 
Units Written: 0 00:22:52.643 Host Read Commands: 0 00:22:52.643 Host Write Commands: 0 00:22:52.643 Controller Busy Time: 0 minutes 00:22:52.643 Power Cycles: 0 00:22:52.643 Power On Hours: 0 hours 00:22:52.643 Unsafe Shutdowns: 0 00:22:52.643 Unrecoverable Media Errors: 0 00:22:52.643 Lifetime Error Log Entries: 0 00:22:52.643 Warning Temperature Time: 0 minutes 00:22:52.643 Critical Temperature Time: 0 minutes 00:22:52.643 00:22:52.643 Number of Queues 00:22:52.643 ================ 00:22:52.643 Number of I/O Submission Queues: 127 00:22:52.643 Number of I/O Completion Queues: 127 00:22:52.643 00:22:52.643 Active Namespaces 00:22:52.643 ================= 00:22:52.643 Namespace ID:1 00:22:52.643 Error Recovery Timeout: Unlimited 00:22:52.643 Command Set Identifier: NVM (00h) 00:22:52.643 Deallocate: Supported 00:22:52.643 Deallocated/Unwritten Error: Not Supported 00:22:52.643 Deallocated Read Value: Unknown 00:22:52.643 Deallocate in Write Zeroes: Not Supported 00:22:52.643 Deallocated Guard Field: 0xFFFF 00:22:52.643 Flush: Supported 00:22:52.643 Reservation: Supported 00:22:52.643 Namespace Sharing Capabilities: Multiple Controllers 00:22:52.643 Size (in LBAs): 131072 (0GiB) 00:22:52.643 Capacity (in LBAs): 131072 (0GiB) 00:22:52.643 Utilization (in LBAs): 131072 (0GiB) 00:22:52.643 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:52.643 EUI64: ABCDEF0123456789 00:22:52.643 UUID: 1e41178c-c38c-4602-8ba9-7f7ff3dd0fad 00:22:52.643 Thin Provisioning: Not Supported 00:22:52.643 Per-NS Atomic Units: Yes 00:22:52.643 Atomic Boundary Size (Normal): 0 00:22:52.643 Atomic Boundary Size (PFail): 0 00:22:52.643 Atomic Boundary Offset: 0 00:22:52.643 Maximum Single Source Range Length: 65535 00:22:52.643 Maximum Copy Length: 65535 00:22:52.643 Maximum Source Range Count: 1 00:22:52.643 NGUID/EUI64 Never Reused: No 00:22:52.643 Namespace Write Protected: No 00:22:52.643 Number of LBA Formats: 1 00:22:52.643 Current LBA Format: LBA Format #00 00:22:52.643 LBA Format #00: Data 
Size: 512 Metadata Size: 0 00:22:52.643 00:22:52.643 19:13:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:52.643 rmmod nvme_tcp 00:22:52.643 rmmod nvme_fabrics 00:22:52.643 rmmod nvme_keyring 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 961731 ']' 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 961731 00:22:52.643 19:13:45 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 961731 ']' 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 961731 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 961731 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 961731' 00:22:52.643 killing process with pid 961731 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 961731 00:22:52.643 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 961731 00:22:53.210 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:53.210 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:53.210 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:53.210 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.210 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:53.210 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.210 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.210 19:13:45 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:55.113 00:22:55.113 real 0m6.568s 00:22:55.113 user 0m7.389s 00:22:55.113 sys 0m2.220s 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:55.113 ************************************ 00:22:55.113 END TEST nvmf_identify 00:22:55.113 ************************************ 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.113 ************************************ 00:22:55.113 START TEST nvmf_perf 00:22:55.113 ************************************ 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:55.113 * Looking for test storage... 
00:22:55.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.113 19:13:47 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.113 19:13:47 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:55.113 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:55.114 19:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:57.644 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:57.645 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.645 19:13:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:57.645 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:09:00.0: cvl_0_0' 00:22:57.645 Found net devices under 0000:09:00.0: cvl_0_0 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:57.645 Found net devices under 0000:09:00.1: cvl_0_1 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.645 19:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:57.645 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:22:57.645 00:22:57.645 --- 10.0.0.2 ping statistics --- 00:22:57.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.645 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:57.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:22:57.645 00:22:57.645 --- 10.0.0.1 ping statistics --- 00:22:57.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.645 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 
00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=964228 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 964228 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 964228 ']' 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:57.645 19:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:57.906 [2024-07-25 19:13:50.152355] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:57.906 [2024-07-25 19:13:50.152447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.906 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.906 [2024-07-25 19:13:50.230808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.906 [2024-07-25 19:13:50.342986] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:57.906 [2024-07-25 19:13:50.343041] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.906 [2024-07-25 19:13:50.343071] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.906 [2024-07-25 19:13:50.343083] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.906 [2024-07-25 19:13:50.343094] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.906 [2024-07-25 19:13:50.343165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.906 [2024-07-25 19:13:50.343225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.906 [2024-07-25 19:13:50.343277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.906 [2024-07-25 19:13:50.343280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.840 19:13:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.840 19:13:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:22:58.840 19:13:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:58.840 19:13:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.840 19:13:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:58.840 19:13:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.840 19:13:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:58.840 19:13:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:02.111 19:13:54 nvmf_tcp.nvmf_host.nvmf_perf 
-- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:02.111 19:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:02.111 19:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:23:02.111 19:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:02.368 19:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:02.368 19:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:23:02.368 19:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:02.368 19:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:02.368 19:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:02.625 [2024-07-25 19:13:54.977995] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.625 19:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:02.883 19:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:02.883 19:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:03.140 19:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:03.140 19:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:03.397 19:13:55 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.654 [2024-07-25 19:13:55.973656] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.654 19:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:03.910 19:13:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:23:03.911 19:13:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:23:03.911 19:13:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:03.911 19:13:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:23:05.280 Initializing NVMe Controllers 00:23:05.280 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:23:05.280 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:23:05.280 Initialization complete. Launching workers. 
00:23:05.280 ======================================================== 00:23:05.280 Latency(us) 00:23:05.280 Device Information : IOPS MiB/s Average min max 00:23:05.280 PCIE (0000:0b:00.0) NSID 1 from core 0: 85453.92 333.80 373.76 22.30 4387.03 00:23:05.280 ======================================================== 00:23:05.280 Total : 85453.92 333.80 373.76 22.30 4387.03 00:23:05.280 00:23:05.280 19:13:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:05.280 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.652 Initializing NVMe Controllers 00:23:06.653 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:06.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:06.653 Initialization complete. Launching workers. 
00:23:06.653 ======================================================== 00:23:06.653 Latency(us) 00:23:06.653 Device Information : IOPS MiB/s Average min max 00:23:06.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 56.00 0.22 18279.62 230.54 45723.36 00:23:06.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 57.00 0.22 17780.09 7941.48 47897.76 00:23:06.653 ======================================================== 00:23:06.653 Total : 113.00 0.44 18027.64 230.54 47897.76 00:23:06.653 00:23:06.653 19:13:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:06.653 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.024 Initializing NVMe Controllers 00:23:08.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:08.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:08.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:08.024 Initialization complete. Launching workers. 
00:23:08.024 ======================================================== 00:23:08.024 Latency(us) 00:23:08.024 Device Information : IOPS MiB/s Average min max 00:23:08.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8079.85 31.56 3961.00 430.90 9778.40 00:23:08.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3890.93 15.20 8261.67 6027.36 15987.29 00:23:08.024 ======================================================== 00:23:08.024 Total : 11970.77 46.76 5358.87 430.90 15987.29 00:23:08.024 00:23:08.024 19:14:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:08.024 19:14:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:08.024 19:14:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:08.024 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.586 Initializing NVMe Controllers 00:23:10.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:10.586 Controller IO queue size 128, less than required. 00:23:10.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.586 Controller IO queue size 128, less than required. 00:23:10.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:10.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:10.586 Initialization complete. Launching workers. 
00:23:10.586 ======================================================== 00:23:10.586 Latency(us) 00:23:10.586 Device Information : IOPS MiB/s Average min max 00:23:10.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 805.90 201.47 166197.05 88416.03 253456.90 00:23:10.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 474.47 118.62 276943.30 169174.92 416624.18 00:23:10.586 ======================================================== 00:23:10.586 Total : 1280.36 320.09 207236.53 88416.03 416624.18 00:23:10.586 00:23:10.586 19:14:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:10.586 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.843 No valid NVMe controllers or AIO or URING devices found 00:23:10.843 Initializing NVMe Controllers 00:23:10.843 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:10.843 Controller IO queue size 128, less than required. 00:23:10.843 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.843 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:10.843 Controller IO queue size 128, less than required. 00:23:10.843 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.843 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:10.843 WARNING: Some requested NVMe devices were skipped 00:23:10.844 19:14:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:10.844 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.367 Initializing NVMe Controllers 00:23:13.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:13.367 Controller IO queue size 128, less than required. 00:23:13.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:13.367 Controller IO queue size 128, less than required. 00:23:13.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:13.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:13.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:13.367 Initialization complete. Launching workers. 
00:23:13.367 00:23:13.367 ==================== 00:23:13.367 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:13.367 TCP transport: 00:23:13.367 polls: 8120 00:23:13.367 idle_polls: 6421 00:23:13.367 sock_completions: 1699 00:23:13.367 nvme_completions: 3307 00:23:13.367 submitted_requests: 4962 00:23:13.367 queued_requests: 1 00:23:13.367 00:23:13.367 ==================== 00:23:13.367 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:13.367 TCP transport: 00:23:13.367 polls: 10862 00:23:13.367 idle_polls: 8030 00:23:13.367 sock_completions: 2832 00:23:13.367 nvme_completions: 2805 00:23:13.367 submitted_requests: 4198 00:23:13.367 queued_requests: 1 00:23:13.367 ======================================================== 00:23:13.367 Latency(us) 00:23:13.367 Device Information : IOPS MiB/s Average min max 00:23:13.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 826.26 206.57 163352.10 109583.64 279075.39 00:23:13.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 700.80 175.20 187835.37 71263.18 292985.62 00:23:13.367 ======================================================== 00:23:13.367 Total : 1527.06 381.76 174587.96 71263.18 292985.62 00:23:13.367 00:23:13.367 19:14:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:13.367 19:14:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:13.625 19:14:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:13.625 19:14:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:13.625 19:14:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:13.625 19:14:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:13.625 19:14:05 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@117 -- # sync 00:23:13.625 19:14:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:13.625 19:14:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:13.625 19:14:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:13.625 19:14:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:13.625 rmmod nvme_tcp 00:23:13.625 rmmod nvme_fabrics 00:23:13.625 rmmod nvme_keyring 00:23:13.625 19:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:13.625 19:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:13.625 19:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:13.625 19:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 964228 ']' 00:23:13.625 19:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 964228 00:23:13.625 19:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 964228 ']' 00:23:13.625 19:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 964228 00:23:13.625 19:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:23:13.625 19:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:13.625 19:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 964228 00:23:13.625 19:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:13.625 19:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:13.625 19:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 964228' 00:23:13.625 killing process with pid 964228 00:23:13.625 19:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # 
kill 964228 00:23:13.625 19:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 964228 00:23:15.523 19:14:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:15.523 19:14:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:15.523 19:14:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:15.523 19:14:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:15.523 19:14:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:15.523 19:14:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.523 19:14:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.523 19:14:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:17.428 00:23:17.428 real 0m22.201s 00:23:17.428 user 1m6.598s 00:23:17.428 sys 0m5.348s 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:17.428 ************************************ 00:23:17.428 END TEST nvmf_perf 00:23:17.428 ************************************ 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.428 ************************************ 
00:23:17.428 START TEST nvmf_fio_host 00:23:17.428 ************************************ 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:17.428 * Looking for test storage... 00:23:17.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.428 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:17.429 19:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.960 19:14:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:19.960 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:19.961 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:19.961 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:19.961 Found net devices under 0000:09:00.0: cvl_0_0 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:19.961 Found net devices under 0000:09:00.1: cvl_0_1 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:19.961 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:20.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:20.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:23:20.220 00:23:20.220 --- 10.0.0.2 ping statistics --- 00:23:20.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.220 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:23:20.220 00:23:20.220 --- 10.0.0.1 ping statistics --- 00:23:20.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.220 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:20.220 
19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=968605 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 968605 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 968605 ']' 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.220 19:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.220 [2024-07-25 19:14:12.515309] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:20.220 [2024-07-25 19:14:12.515379] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.220 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.220 [2024-07-25 19:14:12.589874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:20.478 [2024-07-25 19:14:12.700118] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.478 [2024-07-25 19:14:12.700177] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.478 [2024-07-25 19:14:12.700198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.478 [2024-07-25 19:14:12.700216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.478 [2024-07-25 19:14:12.700231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:20.478 [2024-07-25 19:14:12.700301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.478 [2024-07-25 19:14:12.700331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.478 [2024-07-25 19:14:12.700390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.478 [2024-07-25 19:14:12.700404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.043 19:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:21.043 19:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:23:21.043 19:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:21.608 [2024-07-25 19:14:13.779782] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.608 19:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:21.608 19:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:21.608 19:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.608 19:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:21.608 Malloc1 00:23:21.866 19:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:21.866 19:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:22.123 19:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:22.381 [2024-07-25 19:14:14.798886] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.381 19:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:22.638 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:22.896 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:22.896 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:22.896 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:22.896 19:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:22.896 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:22.896 fio-3.35 
00:23:22.896 Starting 1 thread 00:23:22.896 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.426 00:23:25.426 test: (groupid=0, jobs=1): err= 0: pid=968975: Thu Jul 25 19:14:17 2024 00:23:25.426 read: IOPS=8714, BW=34.0MiB/s (35.7MB/s)(68.3MiB/2006msec) 00:23:25.426 slat (usec): min=2, max=148, avg= 2.76, stdev= 1.74 00:23:25.426 clat (usec): min=3198, max=13606, avg=8104.08, stdev=626.82 00:23:25.426 lat (usec): min=3227, max=13617, avg=8106.84, stdev=626.66 00:23:25.426 clat percentiles (usec): 00:23:25.426 | 1.00th=[ 6718], 5.00th=[ 7111], 10.00th=[ 7373], 20.00th=[ 7635], 00:23:25.426 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8225], 00:23:25.426 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 9110], 00:23:25.426 | 99.00th=[ 9503], 99.50th=[ 9634], 99.90th=[11994], 99.95th=[12256], 00:23:25.426 | 99.99th=[13566] 00:23:25.426 bw ( KiB/s): min=34400, max=35600, per=99.93%, avg=34832.00, stdev=526.95, samples=4 00:23:25.426 iops : min= 8600, max= 8900, avg=8708.00, stdev=131.74, samples=4 00:23:25.426 write: IOPS=8709, BW=34.0MiB/s (35.7MB/s)(68.2MiB/2006msec); 0 zone resets 00:23:25.426 slat (usec): min=2, max=142, avg= 2.89, stdev= 1.55 00:23:25.426 clat (usec): min=1354, max=12843, avg=6486.89, stdev=553.59 00:23:25.426 lat (usec): min=1363, max=12846, avg=6489.78, stdev=553.51 00:23:25.426 clat percentiles (usec): 00:23:25.426 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:23:25.426 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6456], 60.00th=[ 6587], 00:23:25.426 | 70.00th=[ 6718], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7242], 00:23:25.426 | 99.00th=[ 7635], 99.50th=[ 7898], 99.90th=[11207], 99.95th=[11994], 00:23:25.426 | 99.99th=[12780] 00:23:25.426 bw ( KiB/s): min=33840, max=35528, per=99.94%, avg=34818.00, stdev=770.12, samples=4 00:23:25.426 iops : min= 8460, max= 8882, avg=8704.50, stdev=192.53, samples=4 00:23:25.426 lat (msec) : 2=0.02%, 4=0.12%, 10=99.66%, 20=0.20% 
00:23:25.426 cpu : usr=56.01%, sys=36.71%, ctx=79, majf=0, minf=38 00:23:25.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:25.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:25.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:25.426 issued rwts: total=17481,17471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:25.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:25.426 00:23:25.426 Run status group 0 (all jobs): 00:23:25.426 READ: bw=34.0MiB/s (35.7MB/s), 34.0MiB/s-34.0MiB/s (35.7MB/s-35.7MB/s), io=68.3MiB (71.6MB), run=2006-2006msec 00:23:25.426 WRITE: bw=34.0MiB/s (35.7MB/s), 34.0MiB/s-34.0MiB/s (35.7MB/s-35.7MB/s), io=68.2MiB (71.6MB), run=2006-2006msec 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 
00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:25.426 19:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:23:25.684 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:25.684 fio-3.35 00:23:25.684 Starting 1 thread 00:23:25.684 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.211 00:23:28.211 test: (groupid=0, jobs=1): err= 0: pid=969428: Thu Jul 25 19:14:20 2024 00:23:28.211 read: IOPS=8095, BW=126MiB/s (133MB/s)(254MiB/2005msec) 00:23:28.211 slat (nsec): min=2776, max=96057, avg=3709.37, stdev=1563.13 00:23:28.211 clat (usec): min=3544, max=19630, avg=9538.47, stdev=2389.61 00:23:28.211 lat (usec): min=3547, max=19633, avg=9542.17, stdev=2389.66 00:23:28.211 clat percentiles (usec): 00:23:28.211 | 1.00th=[ 4817], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 7308], 00:23:28.211 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[10290], 00:23:28.211 | 70.00th=[10945], 80.00th=[11600], 90.00th=[12518], 95.00th=[13435], 00:23:28.211 | 99.00th=[15664], 99.50th=[16188], 99.90th=[17171], 99.95th=[17433], 00:23:28.211 | 99.99th=[18220] 00:23:28.211 bw ( KiB/s): min=54240, max=73536, per=51.02%, avg=66084.50, stdev=8540.70, samples=4 00:23:28.211 iops : min= 3390, max= 4596, avg=4130.25, stdev=533.77, samples=4 00:23:28.211 write: IOPS=4677, BW=73.1MiB/s (76.6MB/s)(135MiB/1847msec); 0 zone resets 00:23:28.211 slat (usec): min=30, max=148, avg=33.52, stdev= 4.84 00:23:28.211 clat (usec): min=3795, max=19745, avg=11058.15, stdev=1966.04 00:23:28.211 lat (usec): min=3828, max=19777, avg=11091.66, stdev=1966.08 00:23:28.211 clat percentiles (usec): 00:23:28.211 | 1.00th=[ 7308], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9503], 00:23:28.211 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10683], 60.00th=[11207], 00:23:28.211 | 70.00th=[11863], 80.00th=[12649], 90.00th=[13698], 95.00th=[14484], 00:23:28.211 | 99.00th=[16712], 99.50th=[17433], 99.90th=[18744], 99.95th=[19268], 00:23:28.211 | 99.99th=[19792] 00:23:28.211 bw ( KiB/s): min=57056, max=76768, per=91.56%, 
avg=68530.75, stdev=8847.46, samples=4 00:23:28.211 iops : min= 3566, max= 4798, avg=4283.00, stdev=552.82, samples=4 00:23:28.211 lat (msec) : 4=0.12%, 10=47.41%, 20=52.47% 00:23:28.211 cpu : usr=73.60%, sys=22.50%, ctx=23, majf=0, minf=52 00:23:28.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:23:28.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:28.211 issued rwts: total=16231,8640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:28.211 00:23:28.211 Run status group 0 (all jobs): 00:23:28.211 READ: bw=126MiB/s (133MB/s), 126MiB/s-126MiB/s (133MB/s-133MB/s), io=254MiB (266MB), run=2005-2005msec 00:23:28.211 WRITE: bw=73.1MiB/s (76.6MB/s), 73.1MiB/s-73.1MiB/s (76.6MB/s-76.6MB/s), io=135MiB (142MB), run=1847-1847msec 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i 
in {1..20} 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:28.211 rmmod nvme_tcp 00:23:28.211 rmmod nvme_fabrics 00:23:28.211 rmmod nvme_keyring 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 968605 ']' 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 968605 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 968605 ']' 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 968605 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 968605 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 968605' 00:23:28.211 killing process with pid 968605 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 968605 00:23:28.211 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 968605 00:23:28.471 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:28.471 
19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:28.471 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:28.471 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:28.471 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:28.471 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.471 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.471 19:14:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.004 19:14:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:31.004 00:23:31.004 real 0m13.171s 00:23:31.004 user 0m38.013s 00:23:31.004 sys 0m4.402s 00:23:31.004 19:14:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:31.004 19:14:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.004 ************************************ 00:23:31.004 END TEST nvmf_fio_host 00:23:31.004 ************************************ 00:23:31.004 19:14:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:31.005 19:14:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:31.005 19:14:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:31.005 19:14:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.005 ************************************ 00:23:31.005 START TEST nvmf_failover 00:23:31.005 ************************************ 00:23:31.005 19:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:31.005 * Looking for test storage... 00:23:31.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:31.005 19:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.923 
19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:32.923 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:32.923 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:32.923 19:14:25 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.923 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:32.924 Found net devices under 0000:09:00.0: cvl_0_0 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:32.924 Found net devices under 0000:09:00.1: cvl_0_1 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.924 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:33.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:33.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:23:33.199 00:23:33.199 --- 10.0.0.2 ping statistics --- 00:23:33.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.199 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:33.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:23:33.199 00:23:33.199 --- 10.0.0.1 ping statistics --- 00:23:33.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.199 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 
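The two ping summaries above are the only latency readings taken during setup. If a script needed to act on them (for example, to fail fast on a slow veth path), the average RTT can be pulled out of the `rtt min/avg/max/mdev = ...` summary line with a one-line awk filter. The `parse_avg_rtt` helper below is invented for this sketch, not part of nvmf/common.sh, and it assumes GNU/iputils ping's summary format:

```shell
# Extract the avg field from a ping summary line such as:
#   rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms
# Splitting on '/' or space puts min in $7 and avg in $8.
parse_avg_rtt() {
  awk -F'[/ ]' '/^rtt/ {print $8}'
}

echo 'rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms' | parse_avg_rtt
```

In a live script the input would come from `ping -c 1 "$ip" | parse_avg_rtt` rather than an echo.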
00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=971915 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 971915 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 971915 ']' 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:33.199 19:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:33.199 [2024-07-25 19:14:25.540583] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:33.199 [2024-07-25 19:14:25.540672] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.199 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.199 [2024-07-25 19:14:25.627581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:33.457 [2024-07-25 19:14:25.739541] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.457 [2024-07-25 19:14:25.739595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.457 [2024-07-25 19:14:25.739623] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.457 [2024-07-25 19:14:25.739634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.457 [2024-07-25 19:14:25.739643] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
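`waitforlisten` above blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock, bounded by `max_retries=100`. A minimal sketch of that poll-with-bounded-retries pattern, substituting an ordinary file for the UNIX socket (the `waitfor` name, retry count, and /tmp paths are stand-ins for illustration, not the real autotest helper):

```shell
# Poll for a filesystem path to appear; give up after max_retries attempts.
waitfor() {
  path=$1
  max_retries=5
  i=0
  until [ -e "$path" ]; do
    i=$((i + 1))
    if [ "$i" -gt "$max_retries" ]; then
      return 1   # timed out waiting for the path
    fi
    sleep 0.1
  done
  return 0       # path exists, target is "listening"
}

touch /tmp/fake_spdk.sock
waitfor /tmp/fake_spdk.sock && echo "ready"
```

The real helper additionally probes the RPC socket with an actual request, since the socket file can exist before the server is accepting calls.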
00:23:33.457 [2024-07-25 19:14:25.743122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.457 [2024-07-25 19:14:25.743190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.457 [2024-07-25 19:14:25.743194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.390 19:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:34.390 19:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:34.390 19:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:34.390 19:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:34.390 19:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:34.390 19:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.390 19:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:34.390 [2024-07-25 19:14:26.804489] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.390 19:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:34.648 Malloc0 00:23:34.648 19:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:34.905 19:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:35.163 19:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:35.421 [2024-07-25 19:14:27.807797] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.421 19:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:35.678 [2024-07-25 19:14:28.044522] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:35.678 19:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:35.936 [2024-07-25 19:14:28.293358] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:35.936 19:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=972330 00:23:35.936 19:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:35.936 19:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:35.936 19:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 972330 /var/tmp/bdevperf.sock 00:23:35.936 19:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 972330 ']' 00:23:35.936 19:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.936 19:14:28 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:35.936 19:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.936 19:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:35.936 19:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:37.307 19:14:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:37.307 19:14:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:37.307 19:14:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:37.564 NVMe0n1 00:23:37.564 19:14:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:37.821 00:23:37.821 19:14:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=972481 00:23:37.821 19:14:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:37.821 19:14:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:39.195 19:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:23:39.195 [2024-07-25 19:14:31.525398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a95f40 is same with the state(5) to be set 00:23:39.195 [... identical recv-state message for tqpair=0x1a95f40 repeated ~38 more times, 19:14:31.525472 through 19:14:31.525932 ...] 00:23:39.195 19:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:42.476 19:14:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:42.476 00:23:42.734 19:14:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:42.992 [2024-07-25 19:14:35.235859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a96d10 is same with the state(5) to be set 00:23:42.992 [... identical recv-state message for tqpair=0x1a96d10 repeated 3 more times, 19:14:35.235913 through 19:14:35.235940 ...] 00:23:42.992 [2024-07-25 19:14:35.235953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x1a96d10 is same with the state(5) to be set 00:23:42.992 [... identical recv-state message for tqpair=0x1a96d10 repeated ~18 more times, 19:14:35.235965 through 19:14:35.236199 ...] 00:23:42.993 19:14:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:46.271 19:14:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.271 [2024-07-25 19:14:38.533051] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.271 19:14:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:47.205 19:14:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:47.464 [2024-07-25 19:14:39.786088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a97ab0 is same with the state(5) to be set 00:23:47.464 [2024-07-25 19:14:39.786155]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a97ab0 is same with the state(5) to be set 00:23:47.464 [... identical recv-state message for tqpair=0x1a97ab0 repeated ~80 more times, 19:14:39.786170 through 19:14:39.787157 ...] 00:23:47.465 19:14:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 972481 00:23:54.030 0 00:23:54.030 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 972330 00:23:54.030 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 972330 ']' 00:23:54.030 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 972330 00:23:54.030 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:54.030 19:14:45
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.030 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 972330 00:23:54.030 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:54.030 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:54.030 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 972330' 00:23:54.030 killing process with pid 972330 00:23:54.030 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 972330 00:23:54.030 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 972330 00:23:54.030 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:54.030 [2024-07-25 19:14:28.356614] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:54.030 [2024-07-25 19:14:28.356695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid972330 ] 00:23:54.030 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.030 [2024-07-25 19:14:28.424908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.030 [2024-07-25 19:14:28.537646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.030 Running I/O for 15 seconds... 
00:23:54.031 [2024-07-25 19:14:31.527007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 
19:14:31.527602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527753] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.527974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.527990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.528004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.528018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.031 [2024-07-25 19:14:31.528032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.528047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.031 [2024-07-25 19:14:31.528061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.528076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.031 [2024-07-25 19:14:31.528090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.528111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.031 [2024-07-25 19:14:31.528127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.528142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.031 [2024-07-25 19:14:31.528164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.528182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.031 [2024-07-25 19:14:31.528197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.528212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.031 [2024-07-25 19:14:31.528232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.031 [2024-07-25 19:14:31.528247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 
[2024-07-25 19:14:31.528467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:54.032 [2024-07-25 19:14:31.528962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.528977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.528991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.529005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.529019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.529033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.529047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.529062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.529075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.529090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.529111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.529128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.529142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.529162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.529175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.529190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.529204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.529218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.529232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.529249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.529263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.032 [2024-07-25 19:14:31.529284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.032 [2024-07-25 19:14:31.529298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.033 [2024-07-25 19:14:31.529331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.033 [2024-07-25 19:14:31.529360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.033 [2024-07-25 19:14:31.529388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.033 [2024-07-25 19:14:31.529417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.033 [2024-07-25 19:14:31.529444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:54.033 [2024-07-25 19:14:31.529473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.033 [2024-07-25 19:14:31.529501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.033 [2024-07-25 19:14:31.529529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.033 [2024-07-25 19:14:31.529557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.033 [2024-07-25 19:14:31.529585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.033 [2024-07-25 19:14:31.529613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.033 [2024-07-25 19:14:31.529642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.033 [2024-07-25 19:14:31.529673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.033 [2024-07-25 19:14:31.529702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.529731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.529760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.529788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.529815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.529843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.529871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.529899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.529927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 
[2024-07-25 19:14:31.529955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.529983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.529997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.530011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.530029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.530043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.530058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.530072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.530086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.530100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.530123] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.530137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.530152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.530165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.530180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.530194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.530209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.530224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.530239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.530252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.530267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.530281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.530295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.530309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.530324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.033 [2024-07-25 19:14:31.530338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.033 [2024-07-25 19:14:31.530353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 
19:14:31.530789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:31.530831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.034 [2024-07-25 19:14:31.530875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.034 [2024-07-25 19:14:31.530887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84408 len:8 PRP1 0x0 PRP2 0x0 00:23:54.034 [2024-07-25 19:14:31.530900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.530964] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd4ac10 was disconnected and freed. reset controller. 
00:23:54.034 [2024-07-25 19:14:31.530982] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:54.034 [2024-07-25 19:14:31.531016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.034 [2024-07-25 19:14:31.531034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.531048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.034 [2024-07-25 19:14:31.531061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.531075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.034 [2024-07-25 19:14:31.531088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.531107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.034 [2024-07-25 19:14:31.531122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:31.531135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:54.034 [2024-07-25 19:14:31.531204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2d0f0 (9): Bad file descriptor 00:23:54.034 [2024-07-25 19:14:31.534468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:54.034 [2024-07-25 19:14:31.570127] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:54.034 [2024-07-25 19:14:35.236425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:35.236468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:35.236498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:35.236513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:35.236536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:35.236551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:35.236567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:35.236580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:35.236595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92760 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:35.236609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:35.236624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:35.236638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:35.236653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:35.236666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:35.236681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:35.236694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:35.236709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:35.236722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:35.236753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:35.236766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:35.236781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:35.236794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.034 [2024-07-25 19:14:35.236808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.034 [2024-07-25 19:14:35.236822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.236836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.035 [2024-07-25 19:14:35.236849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.236863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.236876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.236891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.236908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.236923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.236936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.236951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.236965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.236979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.236992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.035 
[2024-07-25 19:14:35.237110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.035 [2024-07-25 19:14:35.237595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.035 
[2024-07-25 19:14:35.237622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.035 [2024-07-25 19:14:35.237637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.036 [2024-07-25 19:14:35.237651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.237669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.036 [2024-07-25 19:14:35.237683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.237699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.036 [2024-07-25 19:14:35.237712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.237727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.036 [2024-07-25 19:14:35.237741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.237756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.036 [2024-07-25 19:14:35.237769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.237784] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.036 [2024-07-25 19:14:35.237798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.237813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.237826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.237841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.237855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.237870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.237883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.237898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.237912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.237927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.237941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.237956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.237969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.237984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.237998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 
19:14:35.238125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238282] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238614] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.036 [2024-07-25 19:14:35.238686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.036 [2024-07-25 19:14:35.238699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.238714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.238728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.238743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.238759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.238775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 
nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.238789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.238804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.238817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.238832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.238846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.238861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.238875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.238889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.238903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.238918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.238932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:54.037 [2024-07-25 19:14:35.238946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.238960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.238974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.238988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 
[2024-07-25 19:14:35.239457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.037 [2024-07-25 19:14:35.239533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.037 [2024-07-25 19:14:35.239562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.037 [2024-07-25 19:14:35.239591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.037 [2024-07-25 19:14:35.239619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.037 [2024-07-25 19:14:35.239647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.037 [2024-07-25 19:14:35.239676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.037 [2024-07-25 19:14:35.239704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.037 [2024-07-25 19:14:35.239719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.037 [2024-07-25 19:14:35.239732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.239747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.038 [2024-07-25 19:14:35.239760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.239775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.038 [2024-07-25 19:14:35.239794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.239809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.038 [2024-07-25 19:14:35.239823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.239838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.038 [2024-07-25 19:14:35.239851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.239869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.038 [2024-07-25 19:14:35.239887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.239903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.038 [2024-07-25 19:14:35.239918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.239933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.038 [2024-07-25 19:14:35.239947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 
[2024-07-25 19:14:35.239962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.038 [2024-07-25 19:14:35.239977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.239992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.038 [2024-07-25 19:14:35.240006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.240021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.038 [2024-07-25 19:14:35.240035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.240051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.038 [2024-07-25 19:14:35.240065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.240080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.038 [2024-07-25 19:14:35.240094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.240116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.038 [2024-07-25 19:14:35.240131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.240147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.038 [2024-07-25 19:14:35.240161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.240176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.038 [2024-07-25 19:14:35.240190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.240206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.038 [2024-07-25 19:14:35.240220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.240234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5bd40 is same with the state(5) to be set 00:23:54.038 [2024-07-25 19:14:35.240252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.038 [2024-07-25 19:14:35.240264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.038 [2024-07-25 19:14:35.240280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93016 len:8 PRP1 0x0 PRP2 0x0 00:23:54.038 [2024-07-25 19:14:35.240293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.240357] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd5bd40 was disconnected and freed. reset controller. 00:23:54.038 [2024-07-25 19:14:35.240377] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:54.038 [2024-07-25 19:14:35.240412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.038 [2024-07-25 19:14:35.240431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.240446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.038 [2024-07-25 19:14:35.240460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.240474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.038 [2024-07-25 19:14:35.240487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.240501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.038 [2024-07-25 19:14:35.240514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.038 [2024-07-25 19:14:35.240527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:54.038 [2024-07-25 19:14:35.240567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2d0f0 (9): Bad file descriptor
00:23:54.038 [2024-07-25 19:14:35.243824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:54.038 [2024-07-25 19:14:35.274272] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:54.038 [2024-07-25 19:14:39.788849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:54.038 [2024-07-25 19:14:39.788894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated *NOTICE* command/completion pairs elided (2024-07-25 19:14:39.788924-19:14:39.791668): READ and WRITE commands, sqid:1, lba:26072-26816, len:8, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:54.041 [2024-07-25 19:14:39.791682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET
0x0 len:0x1000 00:23:54.041 [2024-07-25 19:14:39.791696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.041 [2024-07-25 19:14:39.791726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.041 [2024-07-25 19:14:39.791742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26832 len:8 PRP1 0x0 PRP2 0x0 00:23:54.041 [2024-07-25 19:14:39.791756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.041 [2024-07-25 19:14:39.791774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.041 [2024-07-25 19:14:39.791785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.041 [2024-07-25 19:14:39.791796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26840 len:8 PRP1 0x0 PRP2 0x0 00:23:54.041 [2024-07-25 19:14:39.791808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.041 [2024-07-25 19:14:39.791821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.041 [2024-07-25 19:14:39.791832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.041 [2024-07-25 19:14:39.791843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26848 len:8 PRP1 0x0 PRP2 0x0 00:23:54.041 [2024-07-25 19:14:39.791856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.041 [2024-07-25 19:14:39.791869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.041 [2024-07-25 19:14:39.791879] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.041 [2024-07-25 19:14:39.791890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26856 len:8 PRP1 0x0 PRP2 0x0 00:23:54.041 [2024-07-25 19:14:39.791903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.041 [2024-07-25 19:14:39.791915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.041 [2024-07-25 19:14:39.791926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.041 [2024-07-25 19:14:39.791937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26864 len:8 PRP1 0x0 PRP2 0x0 00:23:54.041 [2024-07-25 19:14:39.791949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.041 [2024-07-25 19:14:39.791962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.041 [2024-07-25 19:14:39.791973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.041 [2024-07-25 19:14:39.791984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26872 len:8 PRP1 0x0 PRP2 0x0 00:23:54.041 [2024-07-25 19:14:39.791996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.041 [2024-07-25 19:14:39.792008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.041 [2024-07-25 19:14:39.792019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26880 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 
[2024-07-25 19:14:39.792047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26888 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26896 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26904 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26912 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26920 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26928 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26936 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26944 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26952 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26960 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26968 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26976 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26984 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26992 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27000 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27008 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27016 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27024 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27032 len:8 PRP1 0x0 PRP2 0x0 00:23:54.042 [2024-07-25 19:14:39.792960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.042 [2024-07-25 19:14:39.792973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.042 [2024-07-25 19:14:39.792984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.042 [2024-07-25 19:14:39.792995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27040 len:8 PRP1 0x0 PRP2 0x0 00:23:54.043 [2024-07-25 19:14:39.793008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.043 [2024-07-25 19:14:39.793021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.043 
[2024-07-25 19:14:39.793032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.043 [2024-07-25 19:14:39.793043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27048 len:8 PRP1 0x0 PRP2 0x0 00:23:54.043 [2024-07-25 19:14:39.793056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.043 [2024-07-25 19:14:39.793069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.043 [2024-07-25 19:14:39.793080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.043 [2024-07-25 19:14:39.793091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27056 len:8 PRP1 0x0 PRP2 0x0 00:23:54.043 [2024-07-25 19:14:39.793110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.043 [2024-07-25 19:14:39.793125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.043 [2024-07-25 19:14:39.793137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.043 [2024-07-25 19:14:39.793151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27064 len:8 PRP1 0x0 PRP2 0x0 00:23:54.043 [2024-07-25 19:14:39.793165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.043 [2024-07-25 19:14:39.793178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.043 [2024-07-25 19:14:39.793189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.043 [2024-07-25 19:14:39.793201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:27072 len:8 PRP1 0x0 PRP2 0x0 00:23:54.043 [2024-07-25 19:14:39.793214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.043 [2024-07-25 19:14:39.793227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.043 [2024-07-25 19:14:39.793238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.043 [2024-07-25 19:14:39.793249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27080 len:8 PRP1 0x0 PRP2 0x0 00:23:54.043 [2024-07-25 19:14:39.793262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.043 [2024-07-25 19:14:39.793324] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd5db40 was disconnected and freed. reset controller. 00:23:54.043 [2024-07-25 19:14:39.793342] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:54.043 [2024-07-25 19:14:39.793377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.043 [2024-07-25 19:14:39.793401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.043 [2024-07-25 19:14:39.793417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.043 [2024-07-25 19:14:39.793430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.043 [2024-07-25 19:14:39.793444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.043 [2024-07-25 
19:14:39.793457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.043 [2024-07-25 19:14:39.793471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.043 [2024-07-25 19:14:39.793484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.043 [2024-07-25 19:14:39.793498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:54.043 [2024-07-25 19:14:39.793537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2d0f0 (9): Bad file descriptor 00:23:54.043 [2024-07-25 19:14:39.796759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:54.043 [2024-07-25 19:14:39.834704] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:54.043 00:23:54.043 Latency(us) 00:23:54.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.043 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:54.043 Verification LBA range: start 0x0 length 0x4000 00:23:54.043 NVMe0n1 : 15.02 8768.44 34.25 253.93 0.00 14159.35 788.86 15243.19 00:23:54.043 =================================================================================================================== 00:23:54.043 Total : 8768.44 34.25 253.93 0.00 14159.35 788.86 15243.19 00:23:54.043 Received shutdown signal, test time was about 15.000000 seconds 00:23:54.043 00:23:54.043 Latency(us) 00:23:54.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.043 =================================================================================================================== 00:23:54.043 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.043 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:54.043 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:54.043 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:54.043 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=974314 00:23:54.043 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:54.043 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 974314 /var/tmp/bdevperf.sock 00:23:54.043 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 974314 ']' 00:23:54.043 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.043 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:23:54.043 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.043 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.043 19:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:54.043 19:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:54.043 19:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:54.043 19:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:54.043 [2024-07-25 19:14:46.305458] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:54.043 19:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:54.301 [2024-07-25 19:14:46.542122] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:54.301 19:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:54.559 NVMe0n1 00:23:54.559 19:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller 
-b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:54.816 00:23:54.816 19:14:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:55.381 00:23:55.381 19:14:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:55.381 19:14:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:55.639 19:14:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:55.896 19:14:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:59.216 19:14:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:59.216 19:14:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:59.216 19:14:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=974986 00:23:59.216 19:14:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:59.216 19:14:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 974986 00:24:00.150 0 00:24:00.150 19:14:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:00.150 [2024-07-25 19:14:45.773928] Starting SPDK v24.09-pre git sha1 
704257090 / DPDK 24.03.0 initialization... 00:24:00.150 [2024-07-25 19:14:45.774012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974314 ] 00:24:00.150 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.150 [2024-07-25 19:14:45.842785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.150 [2024-07-25 19:14:45.950533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.150 [2024-07-25 19:14:48.126234] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:00.150 [2024-07-25 19:14:48.126320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.150 [2024-07-25 19:14:48.126342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.150 [2024-07-25 19:14:48.126361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.150 [2024-07-25 19:14:48.126375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.150 [2024-07-25 19:14:48.126396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.150 [2024-07-25 19:14:48.126410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.150 [2024-07-25 19:14:48.126424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.150 
[2024-07-25 19:14:48.126438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.150 [2024-07-25 19:14:48.126452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.150 [2024-07-25 19:14:48.126495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.150 [2024-07-25 19:14:48.126526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25230f0 (9): Bad file descriptor 00:24:00.150 [2024-07-25 19:14:48.175130] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:00.150 Running I/O for 1 seconds... 00:24:00.150 00:24:00.150 Latency(us) 00:24:00.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.150 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:00.150 Verification LBA range: start 0x0 length 0x4000 00:24:00.150 NVMe0n1 : 1.01 8811.81 34.42 0.00 0.00 14441.12 2402.99 13204.29 00:24:00.150 =================================================================================================================== 00:24:00.150 Total : 8811.81 34.42 0.00 0.00 14441.12 2402.99 13204.29 00:24:00.150 19:14:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:00.150 19:14:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:00.408 19:14:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:00.666 19:14:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:00.666 19:14:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:00.924 19:14:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:01.487 19:14:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:04.764 19:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:04.764 19:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:04.764 19:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 974314 00:24:04.764 19:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 974314 ']' 00:24:04.764 19:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 974314 00:24:04.764 19:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:04.764 19:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:04.764 19:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 974314 00:24:04.764 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:04.764 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:04.764 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 974314' 00:24:04.764 killing process with pid 974314 00:24:04.764 19:14:57 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 974314 00:24:04.764 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 974314 00:24:05.022 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:05.022 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:05.280 rmmod nvme_tcp 00:24:05.280 rmmod nvme_fabrics 00:24:05.280 rmmod nvme_keyring 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 971915 ']' 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 
971915 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 971915 ']' 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 971915 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 971915 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 971915' 00:24:05.280 killing process with pid 971915 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 971915 00:24:05.280 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 971915 00:24:05.540 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:05.540 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:05.540 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:05.540 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:05.540 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:05.540 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.540 19:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.540 19:14:57 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.074 19:14:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:08.074 00:24:08.074 real 0m36.995s 00:24:08.074 user 2m9.583s 00:24:08.074 sys 0m6.311s 00:24:08.074 19:14:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:08.074 19:14:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:08.074 ************************************ 00:24:08.074 END TEST nvmf_failover 00:24:08.074 ************************************ 00:24:08.074 19:14:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:08.074 19:14:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:08.074 19:14:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:08.074 19:14:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.074 ************************************ 00:24:08.074 START TEST nvmf_host_discovery 00:24:08.074 ************************************ 00:24:08.074 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:08.074 * Looking for test storage... 
00:24:08.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.074 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.074 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:08.074 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.074 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- 
# NVME_CONNECT='nvme connect' 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:24:08.075 19:15:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:24:09.978 
19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.978 19:15:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:09.978 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:09.978 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.978 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:09.978 Found net devices under 0000:09:00.0: cvl_0_0 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:09.979 Found net devices under 0000:09:00.1: cvl_0_1 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.979 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:10.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:24:10.237 00:24:10.237 --- 10.0.0.2 ping statistics --- 00:24:10.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.237 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:10.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:24:10.237 00:24:10.237 --- 10.0.0.1 ping statistics --- 00:24:10.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.237 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=978114 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:10.237 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 978114 00:24:10.238 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 978114 ']' 00:24:10.238 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.238 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:10.238 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.238 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:10.238 19:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.238 [2024-07-25 19:15:02.594360] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:10.238 [2024-07-25 19:15:02.594438] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.238 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.238 [2024-07-25 19:15:02.675844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.495 [2024-07-25 19:15:02.793404] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.496 [2024-07-25 19:15:02.793467] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:10.496 [2024-07-25 19:15:02.793482] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.496 [2024-07-25 19:15:02.793495] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.496 [2024-07-25 19:15:02.793514] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.496 [2024-07-25 19:15:02.793551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.429 [2024-07-25 19:15:03.583032] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.429 [2024-07-25 19:15:03.591208] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.429 null0
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.429 null1
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=978265
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:24:11.429 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 978265 /tmp/host.sock
00:24:11.430 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 978265 ']'
00:24:11.430 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock
00:24:11.430 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:11.430 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:24:11.430 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:24:11.430 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:11.430 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.430 [2024-07-25 19:15:03.665014] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:24:11.430 [2024-07-25 19:15:03.665093] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid978265 ]
00:24:11.430 EAL: No free 2048 kB hugepages reported on node 1
00:24:11.430 [2024-07-25 19:15:03.735531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:11.430 [2024-07-25 19:15:03.854003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.688 19:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:11.688 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.947 [2024-07-25 19:15:04.277084] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:24:11.947 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:11.948 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:12.205 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]]
00:24:12.205 19:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:24:12.770 [2024-07-25 19:15:05.052160] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:24:12.770 [2024-07-25 19:15:05.052184] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:24:12.770 [2024-07-25 19:15:05.052216] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:12.770 [2024-07-25 19:15:05.139526] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:24:13.028 [2024-07-25 19:15:05.244324] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:24:13.028 [2024-07-25 19:15:05.244346] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:13.028 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:13.286 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:13.286 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:24:13.286 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:13.286 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:24:13.286 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:24:13.286 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]]
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:13.287 [2024-07-25 19:15:05.721521] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:13.287 [2024-07-25 19:15:05.722552] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:24:13.287 [2024-07-25 19:15:05.722588] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:13.287 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:13.545 [2024-07-25 19:15:05.809076] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:13.545 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:13.546 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:24:13.546 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:24:13.546 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:13.546 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:13.546 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:13.546 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:13.546 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:13.546 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:13.546 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:13.546 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:24:13.546 19:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:24:13.546 [2024-07-25 19:15:05.914796] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:24:13.546 [2024-07-25 19:15:05.914824] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:24:13.546 [2024-07-25 19:15:05.914834] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:14.478 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:14.738 [2024-07-25 19:15:06.958191] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:24:14.738 [2024-07-25 19:15:06.958245] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:14.738 [2024-07-25 19:15:06.965657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:14.738 [2024-07-25 19:15:06.965693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:14.738 [2024-07-25 19:15:06.965713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:14.738 [2024-07-25 19:15:06.965730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.738 [2024-07-25 19:15:06.965746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:14.738 [2024-07-25 19:15:06.965763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.738 [2024-07-25 19:15:06.965780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:14.738 [2024-07-25 19:15:06.965796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.738 [2024-07-25 19:15:06.965810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f4c20 is same with the state(5) to be set
00:24:14.738 [2024-07-25 19:15:06.975662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f4c20 (9): Bad file descriptor
00:24:14.738 19:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:14.738 [2024-07-25 19:15:06.985710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:24:14.738 [2024-07-25 19:15:06.985986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:14.738 [2024-07-25 19:15:06.986018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f4c20 with addr=10.0.0.2, port=4420
00:24:14.738 [2024-07-25 19:15:06.986037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f4c20 is same with the state(5) to be set
00:24:14.738 [2024-07-25 19:15:06.986062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f4c20 (9): Bad file descriptor
00:24:14.738 [2024-07-25 19:15:06.986085] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:24:14.738 [2024-07-25 19:15:06.986100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:24:14.738 [2024-07-25 19:15:06.986152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:14.738 [2024-07-25 19:15:06.986179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:14.738 [2024-07-25 19:15:06.995795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:24:14.738 [2024-07-25 19:15:06.996113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:14.738 [2024-07-25 19:15:06.996157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f4c20 with addr=10.0.0.2, port=4420
00:24:14.738 [2024-07-25 19:15:06.996173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f4c20 is same with the state(5) to be set
00:24:14.738 [2024-07-25 19:15:06.996210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f4c20 (9): Bad file descriptor
00:24:14.738 [2024-07-25 19:15:06.996230] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:24:14.738 [2024-07-25 19:15:06.996244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:24:14.738 [2024-07-25 19:15:06.996256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:14.738 [2024-07-25 19:15:06.996274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:14.738 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:14.738 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:14.738 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:14.738 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:14.738 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:14.738 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:14.738 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
[2024-07-25 19:15:07.005875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-07-25 19:15:07.006168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-25 19:15:07.006197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f4c20 with addr=10.0.0.2, port=4420
[2024-07-25 19:15:07.006213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f4c20 is same with the state(5) to be set
[2024-07-25 19:15:07.006234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f4c20 (9): Bad file descriptor
[2024-07-25 19:15:07.006254] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-07-25 19:15:07.006268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:24:14.738 [2024-07-25 19:15:07.006281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:14.738 [2024-07-25 19:15:07.006300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:14.738 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:24:14.738 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:14.738 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:14.738 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:14.738 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:14.738 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:14.738 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
[2024-07-25 19:15:07.015958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-07-25 19:15:07.016113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-25 19:15:07.016157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f4c20 with addr=10.0.0.2, port=4420
[2024-07-25 19:15:07.016173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f4c20 is same with the state(5) to be set
[2024-07-25 19:15:07.016298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f4c20 (9): Bad file descriptor
[2024-07-25 19:15:07.016318] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-07-25 19:15:07.016331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:24:14.738 [2024-07-25 19:15:07.016344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:14.738 [2024-07-25 19:15:07.016362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-07-25 19:15:07.026045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-07-25 19:15:07.026293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-25 19:15:07.026320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f4c20 with addr=10.0.0.2, port=4420
[2024-07-25 19:15:07.026336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f4c20 is same with the state(5) to be set
[2024-07-25 19:15:07.026357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f4c20 (9): Bad file descriptor
[2024-07-25 19:15:07.026377] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-07-25 19:15:07.026390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:24:14.739 [2024-07-25 19:15:07.026404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:14.739 [2024-07-25 19:15:07.026422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:14.739 [2024-07-25 19:15:07.036123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-07-25 19:15:07.036347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-25 19:15:07.036374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f4c20 with addr=10.0.0.2, port=4420
[2024-07-25 19:15:07.036389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f4c20 is same with the state(5) to be set
[2024-07-25 19:15:07.036410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f4c20 (9): Bad file descriptor
[2024-07-25 19:15:07.036430] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-07-25 19:15:07.036443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-07-25 19:15:07.036472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-07-25 19:15:07.036492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
[2024-07-25 19:15:07.045929] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
[2024-07-25 19:15:07.045967] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]]
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]]
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:14.739 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]]
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:14.998 19:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:15.931 [2024-07-25 19:15:08.330328] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
[2024-07-25 19:15:08.330354] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
[2024-07-25 19:15:08.330375] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:16.189 [2024-07-25 19:15:08.416675] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421
new subsystem nvme0 00:24:16.189 [2024-07-25 19:15:08.485073] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:16.189 [2024-07-25 19:15:08.485131] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:16.189 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.189 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:16.189 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:16.189 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:16.189 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:16.189 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.189 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:16.189 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.189 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:16.189 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.189 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.189 request: 00:24:16.189 { 
00:24:16.189 "name": "nvme", 00:24:16.189 "trtype": "tcp", 00:24:16.189 "traddr": "10.0.0.2", 00:24:16.189 "adrfam": "ipv4", 00:24:16.189 "trsvcid": "8009", 00:24:16.189 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:16.189 "wait_for_attach": true, 00:24:16.189 "method": "bdev_nvme_start_discovery", 00:24:16.189 "req_id": 1 00:24:16.189 } 00:24:16.189 Got JSON-RPC error response 00:24:16.189 response: 00:24:16.189 { 00:24:16.189 "code": -17, 00:24:16.189 "message": "File exists" 00:24:16.189 } 00:24:16.189 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type 
-t rpc_cmd 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.190 request: 00:24:16.190 { 00:24:16.190 "name": "nvme_second", 00:24:16.190 "trtype": "tcp", 00:24:16.190 "traddr": "10.0.0.2", 00:24:16.190 "adrfam": "ipv4", 00:24:16.190 "trsvcid": "8009", 00:24:16.190 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:16.190 "wait_for_attach": true, 00:24:16.190 "method": "bdev_nvme_start_discovery", 00:24:16.190 "req_id": 1 00:24:16.190 } 00:24:16.190 Got JSON-RPC error response 00:24:16.190 response: 00:24:16.190 { 00:24:16.190 "code": -17, 00:24:16.190 "message": "File exists" 00:24:16.190 } 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:16.190 19:15:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:16.190 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:16.448 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.448 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:16.448 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:16.448 19:15:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:16.448 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:16.448 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:16.448 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.448 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:16.448 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.448 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:16.448 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.448 19:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.380 [2024-07-25 19:15:09.693109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.380 [2024-07-25 19:15:09.693168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f8030 with addr=10.0.0.2, port=8010 00:24:17.380 [2024-07-25 19:15:09.693200] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:17.380 [2024-07-25 19:15:09.693214] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:17.380 [2024-07-25 19:15:09.693227] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:18.311 [2024-07-25 19:15:10.695565] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.311 [2024-07-25 19:15:10.695617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f8030 with addr=10.0.0.2, port=8010 00:24:18.311 [2024-07-25 19:15:10.695658] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:18.311 [2024-07-25 19:15:10.695685] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:18.311 [2024-07-25 19:15:10.695708] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:19.276 [2024-07-25 19:15:11.697779] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:19.276 request: 00:24:19.276 { 00:24:19.276 "name": "nvme_second", 00:24:19.276 "trtype": "tcp", 00:24:19.276 "traddr": "10.0.0.2", 00:24:19.276 "adrfam": "ipv4", 00:24:19.276 "trsvcid": "8010", 00:24:19.276 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:19.276 "wait_for_attach": false, 00:24:19.276 "attach_timeout_ms": 3000, 00:24:19.276 "method": "bdev_nvme_start_discovery", 00:24:19.276 "req_id": 1 00:24:19.276 } 00:24:19.276 Got JSON-RPC error response 00:24:19.276 response: 00:24:19.276 { 00:24:19.276 "code": -110, 00:24:19.276 "message": "Connection timed out" 00:24:19.276 } 00:24:19.276 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:19.276 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:19.276 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:19.276 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:19.276 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:19.276 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:24:19.276 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:19.276 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:19.276 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.276 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.276 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:19.276 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:19.276 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.534 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:19.534 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:19.534 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 978265 00:24:19.534 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:19.534 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:19.534 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:19.534 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:19.534 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:19.534 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:19.534 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:19.534 rmmod nvme_tcp 00:24:19.534 rmmod nvme_fabrics 00:24:19.534 rmmod nvme_keyring 00:24:19.534 19:15:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:19.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:19.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:19.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 978114 ']' 00:24:19.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 978114 00:24:19.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 978114 ']' 00:24:19.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 978114 00:24:19.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:24:19.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:19.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 978114 00:24:19.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:19.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:19.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 978114' 00:24:19.535 killing process with pid 978114 00:24:19.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 978114 00:24:19.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 978114 00:24:19.792 19:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:19.792 19:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:19.792 19:15:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:19.793 19:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:19.793 19:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:19.793 19:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.793 19:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.793 19:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.691 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:21.950 00:24:21.950 real 0m14.160s 00:24:21.950 user 0m19.943s 00:24:21.950 sys 0m3.178s 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.950 ************************************ 00:24:21.950 END TEST nvmf_host_discovery 00:24:21.950 ************************************ 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.950 ************************************ 00:24:21.950 START TEST nvmf_host_multipath_status 00:24:21.950 ************************************ 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:21.950 * Looking for test storage... 00:24:21.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 
-- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:21.950 19:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:21.950 19:15:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.481 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:24.482 
19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:24.482 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:24.482 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:24.482 Found net devices under 0000:09:00.0: cvl_0_0 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.482 19:15:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:24.482 Found net devices under 0000:09:00.1: cvl_0_1 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:24.482 19:15:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:24.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:24.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:24:24.482 00:24:24.482 --- 10.0.0.2 ping statistics --- 00:24:24.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.482 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:24.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:24:24.482 00:24:24.482 --- 10.0.0.1 ping statistics --- 00:24:24.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.482 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:24.482 19:15:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:24.482 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:24.741 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=982212 00:24:24.741 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:24.741 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 982212 00:24:24.741 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 982212 ']' 00:24:24.742 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.742 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:24.742 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.742 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:24.742 19:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:24.742 [2024-07-25 19:15:17.003553] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:24.742 [2024-07-25 19:15:17.003642] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.742 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.742 [2024-07-25 19:15:17.076183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:24.742 [2024-07-25 19:15:17.182919] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.742 [2024-07-25 19:15:17.182975] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.742 [2024-07-25 19:15:17.183004] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.742 [2024-07-25 19:15:17.183015] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.742 [2024-07-25 19:15:17.183024] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:24.742 [2024-07-25 19:15:17.183122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.742 [2024-07-25 19:15:17.183142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.675 19:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:25.675 19:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:25.675 19:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:25.675 19:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:25.675 19:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:25.675 19:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.675 19:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=982212 00:24:25.675 19:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:25.934 [2024-07-25 19:15:18.203265] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.934 19:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:26.192 Malloc0 00:24:26.192 19:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:26.450 19:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:26.707 19:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:26.965 [2024-07-25 19:15:19.363987] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.965 19:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:27.223 [2024-07-25 19:15:19.620713] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:27.223 19:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=982511 00:24:27.223 19:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:27.223 19:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:27.223 19:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 982511 /var/tmp/bdevperf.sock 00:24:27.223 19:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 982511 ']' 00:24:27.223 19:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.223 19:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:27.223 19:15:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:27.223 19:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:27.223 19:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:27.789 19:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:27.789 19:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:27.789 19:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:27.789 19:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:28.354 Nvme0n1 00:24:28.354 19:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:28.919 Nvme0n1 00:24:28.920 19:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:28.920 19:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 
00:24:30.818 19:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:30.818 19:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:31.077 19:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:31.335 19:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:32.706 19:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:32.706 19:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:32.706 19:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.706 19:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:32.706 19:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.706 19:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:32.706 19:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.706 19:15:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:32.993 19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:32.993 19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:32.993 19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.993 19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:33.251 19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.251 19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:33.251 19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.251 19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:33.508 19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.508 19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:33.508 19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.508 
19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:33.766 19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.766 19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:33.766 19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.766 19:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:33.766 19:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.766 19:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:33.766 19:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:34.331 19:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:34.331 19:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:35.703 19:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:35.703 19:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:35.703 19:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.703 19:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:35.703 19:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:35.703 19:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:35.703 19:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.703 19:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:35.960 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.960 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:35.960 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.960 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:36.218 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.218 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:36.218 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.218 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:36.476 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.477 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:36.477 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.477 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:36.734 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.734 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:36.734 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.734 19:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:36.992 19:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.992 19:15:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:36.992 19:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:37.250 19:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:37.509 19:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:38.470 19:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:38.470 19:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:38.470 19:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.470 19:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:38.728 19:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.728 19:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:38.728 19:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.728 19:15:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:38.986 19:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:38.986 19:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:38.986 19:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.986 19:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:39.244 19:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.244 19:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:39.244 19:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.244 19:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:39.502 19:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.502 19:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:39.502 19:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.502 
19:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:39.759 19:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.759 19:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:39.759 19:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.759 19:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:40.017 19:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.017 19:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:40.017 19:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:40.275 19:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:40.534 19:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:41.466 19:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:41.466 19:15:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:41.466 19:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.466 19:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:41.723 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.723 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:41.723 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.723 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:41.982 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:41.982 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:41.982 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.982 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:42.240 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.240 19:15:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:42.240 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.240 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:42.498 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.498 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:42.498 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.498 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:42.756 19:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.756 19:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:42.756 19:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.756 19:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:43.014 19:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.014 
19:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:43.014 19:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:43.272 19:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:43.529 19:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:44.462 19:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:44.462 19:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:44.462 19:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.462 19:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:44.720 19:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.720 19:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:44.720 19:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.720 19:15:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:44.978 19:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.978 19:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:44.978 19:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.978 19:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:45.235 19:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.235 19:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:45.235 19:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.235 19:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:45.493 19:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.493 19:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:45.493 19:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.493 
19:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:45.750 19:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:45.750 19:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:45.750 19:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.750 19:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:46.008 19:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:46.008 19:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:46.008 19:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:46.265 19:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:46.522 19:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:47.454 19:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:47.454 19:15:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:47.454 19:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.454 19:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:47.712 19:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:47.712 19:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:47.712 19:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.712 19:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:47.970 19:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.970 19:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:47.970 19:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.970 19:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:48.229 19:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.229 19:15:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:48.229 19:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.229 19:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:48.487 19:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.487 19:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:48.487 19:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.487 19:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:48.745 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:48.745 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:48.745 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.745 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:49.004 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.004 
19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:49.262 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:49.262 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:49.520 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:49.778 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:50.711 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:50.711 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:50.711 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.711 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:50.969 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.969 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:50.969 
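Up to this point the log exercises the default `active_passive` multipath policy; the `bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active` call above switches the bdev to `active_active`, after which the `check_status` expectations change: every accessible path can be `current`, not just one. The table below is a summary transcribed from the `check_status` calls in this log (it is a reading aid, not SPDK API output); the `current` flags per (ANA state of 4420, ANA state of 4421) pair are:

```python
# (ana_4420, ana_4421) -> (current_4420, current_4421),
# transcribed from the check_status calls in this log.
active_passive = {
    ("non_optimized", "non_optimized"): (True, False),
    ("non_optimized", "inaccessible"):  (True, False),
    ("inaccessible", "inaccessible"):   (False, False),
    ("inaccessible", "optimized"):      (False, True),
}

# After bdev_nvme_set_multipath_policy -p active_active: paths whose ANA
# state permits I/O are all reported current; optimized is preferred
# over non_optimized when both are usable.
active_active = {
    ("optimized", "optimized"):         (True, True),
    ("non_optimized", "optimized"):     (False, True),
    ("non_optimized", "non_optimized"): (True, True),
    ("non_optimized", "inaccessible"):  (True, False),
}

# Sanity check: an inaccessible path is never current under either policy.
for states, current in list(active_passive.items()) + list(active_active.items()):
    for ana, cur in zip(states, current):
        if ana == "inaccessible":
            assert cur is False
```

Note that `connected` stays `true` for both ports throughout: an `inaccessible` ANA state drops the path's `accessible` and `current` flags but does not tear down the TCP connection.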
19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.969 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:51.226 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.226 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:51.226 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.226 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:51.484 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.484 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:51.484 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.484 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:51.741 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.741 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:51.741 
19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.741 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:51.999 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.999 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:51.999 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.999 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:52.257 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.257 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:52.257 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:52.515 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:52.773 19:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 
00:24:53.705 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:53.705 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:53.705 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.705 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:53.963 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:53.963 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:53.963 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.963 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:54.221 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.221 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:54.221 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.221 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:24:54.480 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.480 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:54.480 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.480 19:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:54.737 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.737 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:54.737 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.737 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:55.000 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.000 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:55.000 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.000 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:24:55.295 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.295 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:55.295 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:55.552 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:55.810 19:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:56.743 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:56.743 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:56.743 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.743 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:57.001 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.001 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:57.001 19:15:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.001 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:57.258 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.258 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:57.258 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.258 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:57.516 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.516 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:57.516 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.516 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:57.774 19:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.774 19:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:57.774 19:15:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.774 19:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:58.031 19:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.031 19:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:58.031 19:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.031 19:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:58.289 19:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.289 19:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:58.289 19:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:58.546 19:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:58.803 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 
00:24:59.799 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:59.799 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:59.799 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.799 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:00.056 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.056 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:00.056 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.056 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:00.314 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:00.314 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:00.314 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:00.314 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:00.572 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.572 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:00.572 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.572 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:00.829 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.830 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:00.830 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.830 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:01.086 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.086 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:01.086 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.086 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:25:01.343 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:01.343 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 982511 00:25:01.343 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 982511 ']' 00:25:01.343 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 982511 00:25:01.343 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:01.343 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:01.343 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 982511 00:25:01.343 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:01.343 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:01.343 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 982511' 00:25:01.343 killing process with pid 982511 00:25:01.343 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 982511 00:25:01.343 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 982511 00:25:01.343 Connection closed with partial response: 00:25:01.343 00:25:01.343 00:25:01.614 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 982511 00:25:01.614 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:01.614 
[2024-07-25 19:15:19.682432] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:01.614 [2024-07-25 19:15:19.682513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid982511 ] 00:25:01.614 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.614 [2024-07-25 19:15:19.749874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.614 [2024-07-25 19:15:19.857974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.614 Running I/O for 90 seconds... 00:25:01.614 [2024-07-25 19:15:35.569880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.614 [2024-07-25 19:15:35.569943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 
[2024-07-25 19:15:35.570119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 
19:15:35.570340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 
19:15:35.570590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 
19:15:35.570814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.570974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.570990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.571012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 
19:15:35.571029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.571808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.614 [2024-07-25 19:15:35.571831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:01.614 [2024-07-25 19:15:35.571856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.615 [2024-07-25 19:15:35.571872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.571896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.615 [2024-07-25 19:15:35.571912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.571935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.615 [2024-07-25 19:15:35.571951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.571974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.615 [2024-07-25 19:15:35.571990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 
19:15:35.572014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.615 [2024-07-25 19:15:35.572030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.572487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.615 [2024-07-25 19:15:35.572513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.572545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.572564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.572591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.572608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.572640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.572657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.572684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.572701] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.572727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.572744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.572770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.572787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.572814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.572830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.572857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.572874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.572900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.572917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.572944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.572960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.572986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.615 [2024-07-25 19:15:35.573934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-25 19:15:35.573950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.573977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.573994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574673] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.574957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.574985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.616 [2024-07-25 19:15:35.575916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:01.616 [2024-07-25 19:15:35.575945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:35.575962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.120489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.120570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.120622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.120641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.120680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.120698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.120720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.120753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.120777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.120794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.120816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.120843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.120866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.120882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.120905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.617 [2024-07-25 19:15:51.120922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.120944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.617 [2024-07-25 19:15:51.120960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.120982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.120998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.121020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.121036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.121058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.121074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.121097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.617 [2024-07-25 19:15:51.121122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.121146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.121163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.121185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.121201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.121223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.121239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.121261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.121277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.121299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.121320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.121359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.121375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.121396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.121412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.121434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.617 [2024-07-25 19:15:51.121449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.121470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.617 [2024-07-25 19:15:51.121485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.121522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.617 [2024-07-25 19:15:51.121539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.121562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.617 [2024-07-25 19:15:51.121578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.122953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.617 [2024-07-25 19:15:51.122980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.123009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.617 [2024-07-25 19:15:51.123027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.123050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.123066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.123088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.123114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.123153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.123170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.123192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.123208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.123235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.123251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.123273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.617 [2024-07-25 19:15:51.123288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.123310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.617 [2024-07-25 19:15:51.123327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.123348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.617 [2024-07-25 19:15:51.123364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.123385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.617 [2024-07-25 19:15:51.123401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.123422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.617 [2024-07-25 19:15:51.123438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:01.617 [2024-07-25 19:15:51.123459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.618 [2024-07-25 19:15:51.123474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.123496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.618 [2024-07-25 19:15:51.123511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.123532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.618 [2024-07-25 19:15:51.123564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.123587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.618 [2024-07-25 19:15:51.123604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.123626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.618 [2024-07-25 19:15:51.123642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.123664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.618 [2024-07-25 19:15:51.123680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.123711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.618 [2024-07-25 19:15:51.123728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.123751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.618 [2024-07-25 19:15:51.123767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.123789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.618 [2024-07-25 19:15:51.123805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.123827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.618 [2024-07-25 19:15:51.123843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.123865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.618 [2024-07-25 19:15:51.123881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.123903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.618 [2024-07-25 19:15:51.123918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.123940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.618 [2024-07-25 19:15:51.123956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.123978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.618 [2024-07-25 19:15:51.123994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.124016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.618 [2024-07-25 19:15:51.124032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.124054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.618 [2024-07-25 19:15:51.124070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.124091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.618 [2024-07-25 19:15:51.124115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.124138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.618 [2024-07-25 19:15:51.124155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:01.618 [2024-07-25 19:15:51.124177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.618 [2024-07-25 19:15:51.124197] nvme_qpair.c: 
00:25:01.618-00:25:01.622 [2024-07-25 19:15:51.124220 - 19:15:51.133702] nvme_qpair.c: repetitive notice output elided: for each outstanding I/O on qid:1, nvme_qpair.c:243:nvme_io_qpair_print_command printed the command (READ sqid:1 ... SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, or WRITE sqid:1 ... len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) and nvme_qpair.c:474:spdk_nvme_print_completion printed the matching completion, every one with status ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0. The pattern covers cids 0-121, lbas approximately 79976-81296 (len:8), sqhd values 000b-007e.
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.133718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.133739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.133755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.133777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.133793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.133815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.622 [2024-07-25 19:15:51.133831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.133853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.622 [2024-07-25 19:15:51.133869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.134864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.622 [2024-07-25 19:15:51.134889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.134917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.622 [2024-07-25 19:15:51.134940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.134963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.622 [2024-07-25 19:15:51.134980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.135019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.135057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.135095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.135141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.622 [2024-07-25 19:15:51.135179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.622 [2024-07-25 19:15:51.135217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.135255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.135293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.622 [2024-07-25 19:15:51.135330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.135368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.622 [2024-07-25 19:15:51.135405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.622 [2024-07-25 19:15:51.135448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.622 [2024-07-25 19:15:51.135486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.135908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.135952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.135974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.135990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.136013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.136029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.136050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.136066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.136088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.136111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.136136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.622 [2024-07-25 19:15:51.136168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.136190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.622 [2024-07-25 19:15:51.136206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.136228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.622 [2024-07-25 19:15:51.136243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.136263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.622 [2024-07-25 19:15:51.136278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:01.622 [2024-07-25 19:15:51.136304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.622 [2024-07-25 19:15:51.136321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.136341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.623 [2024-07-25 19:15:51.136357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.136377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.623 [2024-07-25 19:15:51.136393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.136414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.623 [2024-07-25 19:15:51.136429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.136450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.623 [2024-07-25 19:15:51.136465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.136485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.623 [2024-07-25 19:15:51.136516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.136539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.623 [2024-07-25 19:15:51.136556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.136577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.623 [2024-07-25 19:15:51.136593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.136615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.623 [2024-07-25 19:15:51.136631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.136653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.623 [2024-07-25 19:15:51.136669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.623 [2024-07-25 19:15:51.138034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.623 [2024-07-25 19:15:51.138079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.623 [2024-07-25 19:15:51.138134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.623 [2024-07-25 19:15:51.138174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.623 [2024-07-25 19:15:51.138212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.623 [2024-07-25 19:15:51.138250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.623 [2024-07-25 19:15:51.138288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.623 [2024-07-25 19:15:51.138326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.623 [2024-07-25 19:15:51.138364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.623 [2024-07-25 19:15:51.138416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.623 [2024-07-25 19:15:51.138452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.623 [2024-07-25 19:15:51.138487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.623 [2024-07-25 19:15:51.138522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.623 [2024-07-25 19:15:51.138557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.623 [2024-07-25 19:15:51.138598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.623 [2024-07-25 19:15:51.138635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.623 [2024-07-25 19:15:51.138671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.623 [2024-07-25 19:15:51.138706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.623 [2024-07-25 19:15:51.138741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.623 [2024-07-25 19:15:51.138776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:01.623 [2024-07-25 19:15:51.138797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.623 [2024-07-25 19:15:51.138814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.138836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.138851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.138872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.138887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.138908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.138923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.138943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.138958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.138978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.138993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.139013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.139028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.139052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.139069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.139113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.139132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.140699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.140724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.140752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.140769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.140792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.140808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.140829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.140846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.140867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.140883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.140905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.140921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.140943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.140959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.140980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.140996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.141018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.141034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.141055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.141071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.141099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.141125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.141886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.141910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.141951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.141969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.141992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.142008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.142047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.142085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.142133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.142172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.142210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.142248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.142285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.142323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.142366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.142405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.142443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.142481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.142519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.624 [2024-07-25 19:15:51.142557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.624 [2024-07-25 19:15:51.142594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.624 [2024-07-25 19:15:51.142616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.142647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.142669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.142683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.142719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.142735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.142756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.142788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.142811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.142827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.142849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.142869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.142892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.142908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.142930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.142946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.142968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.142984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.143006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.143022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.143044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.143060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.143082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.143098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.143748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.143771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.143798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.143816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.143839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.143855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.143877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.143893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.143915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.143931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.143971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.143987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.144014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.144046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.144068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.144083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.145625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.145649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.145677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.145695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.145717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.145733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.145754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.145771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.145793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.145809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.145831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.145847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.145869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.145885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.145907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.145922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.145944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.145959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.145981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.145997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.146024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.146041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.146062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.146078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.146100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.146125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.146148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.625 [2024-07-25 19:15:51.146165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:01.625 [2024-07-25 19:15:51.146187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.625 [2024-07-25 19:15:51.146203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.146225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.146240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.146262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.146278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.146300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.626 [2024-07-25 19:15:51.146315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.146337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.626 [2024-07-25 19:15:51.146353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.146375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.626 [2024-07-25 19:15:51.146390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.146411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.146427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.146449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.146465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.146487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.146507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.146529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.146546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.146567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.146583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.146605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.146621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.146643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.146658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.146680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.146696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.146718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.626 [2024-07-25 19:15:51.146734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.146756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.626 [2024-07-25 19:15:51.146773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.147769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.147792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.147818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.626 [2024-07-25 19:15:51.147835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.147855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.626 [2024-07-25 19:15:51.147871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.147892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.626 [2024-07-25 19:15:51.147907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.147944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.626 [2024-07-25 19:15:51.147966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.148006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.626 [2024-07-25 19:15:51.148022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.148044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.626 [2024-07-25 19:15:51.148060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.148081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.626 [2024-07-25 19:15:51.148097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.148130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.626 [2024-07-25 19:15:51.148148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.148169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.148186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.148207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.148224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.148245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.148261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.148282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.148298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.148320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.148336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.148358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.148375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.149586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.626 [2024-07-25 19:15:51.149611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:01.626 [2024-07-25 19:15:51.149638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.626 [2024-07-25 19:15:51.149660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:01.626 [2024-07-25 19:15:51.149684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.626 [2024-07-25 19:15:51.149700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:01.626 [2024-07-25 19:15:51.149722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.626 [2024-07-25 19:15:51.149738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:25:01.626 [2024-07-25 19:15:51.149760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.626 [2024-07-25 19:15:51.149776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:25:01.626 [2024-07-25 19:15:51.149797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.626 [2024-07-25 19:15:51.149814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:25:01.626 [2024-07-25 19:15:51.149835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.626 [2024-07-25 19:15:51.149851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:01.626 [2024-07-25 19:15:51.149873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.627 [2024-07-25 19:15:51.149889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.149911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.627 [2024-07-25 19:15:51.149927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.149948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.149964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.149986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.627 [2024-07-25 19:15:51.150001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.150039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.150077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.627 [2024-07-25 19:15:51.150123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.627 [2024-07-25 19:15:51.150168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.627 [2024-07-25 19:15:51.150205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.627 [2024-07-25 19:15:51.150243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.150280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.627 [2024-07-25 19:15:51.150318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.627 [2024-07-25 19:15:51.150356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.150409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.150462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.150499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.150534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.150570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.150605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.150646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.150682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.150717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.150753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.627 [2024-07-25 19:15:51.150788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.627 [2024-07-25 19:15:51.150824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.150845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.627 [2024-07-25 19:15:51.150860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.152931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.627 [2024-07-25 19:15:51.152958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.152986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.627 [2024-07-25 19:15:51.153004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.153026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.153043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.153065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.153081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.153110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.153128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.153159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.153180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.153203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.153220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.153241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.153257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.153279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.627 [2024-07-25 19:15:51.153295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:01.627 [2024-07-25 19:15:51.153317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.153333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.153371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.153409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.153447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.153484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.153522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.628 [2024-07-25 19:15:51.153560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.153597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.153639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.628 [2024-07-25 19:15:51.153678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.628 [2024-07-25 19:15:51.153717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.153755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.153794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.628 [2024-07-25 19:15:51.153832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.153869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.628 [2024-07-25 19:15:51.153922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.628 [2024-07-25 19:15:51.153960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.153980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.628 [2024-07-25 19:15:51.153996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.154035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.628 [2024-07-25 19:15:51.154051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.155163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.628 [2024-07-25 19:15:51.155188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.155216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.155234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.155263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.155280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.155302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.155318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.155340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.628 [2024-07-25 19:15:51.155357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.155379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.628 [2024-07-25 19:15:51.155395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.155417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.155433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.155471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.155487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.155508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.628 [2024-07-25 19:15:51.155539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.155562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.628 [2024-07-25 19:15:51.155578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.155601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.628 [2024-07-25 19:15:51.155616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.155638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.628 [2024-07-25 19:15:51.155654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:01.628 [2024-07-25 19:15:51.155676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.155692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.155714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.155730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.155759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.155776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.155798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.155814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.155836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.155852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.155874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.155890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.155912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.155928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.155950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.155967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.156790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.156827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.156853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.156883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.156905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.156920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.156941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.156956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.156977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.156991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.157028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.157044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.157066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.157109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.157136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.157153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.157175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.157191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.157213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.157229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.157251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.157267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.157289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.157305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.157326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.157342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.157364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.157380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.157401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.157418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.157440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.157456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.157988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.158012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.158039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.158057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.158079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.158108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.158134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.158151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.158172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.158189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.158211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.158227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.158248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.158264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.158286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.158302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.158323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.158339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.158361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.158377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.158413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.158429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.158449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.158480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.158502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.158518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.158540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.629 [2024-07-25 19:15:51.158555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:25:01.629 [2024-07-25 19:15:51.158577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.629 [2024-07-25 19:15:51.158593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:01.630 [2024-07-25 19:15:51.158619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.630 [2024-07-25 19:15:51.158636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:25:01.630 [2024-07-25 19:15:51.158658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.630 [2024-07-25 19:15:51.158674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:25:01.630 [2024-07-25 19:15:51.158695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.630 [2024-07-25 19:15:51.158711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:01.630 [2024-07-25 19:15:51.158733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.630 [2024-07-25 19:15:51.158749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:25:01.630 [2024-07-25 19:15:51.158771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.630 [2024-07-25 19:15:51.158787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.160272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.160317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.160356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.630 [2024-07-25 19:15:51.160395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.630 [2024-07-25 19:15:51.160433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.630 [2024-07-25 19:15:51.160470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.630 [2024-07-25 19:15:51.160508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.630 [2024-07-25 19:15:51.160552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.160590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.160628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.630 [2024-07-25 19:15:51.160665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.160703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.160757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.630 [2024-07-25 19:15:51.160809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.160845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.160880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.630 [2024-07-25 19:15:51.160916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.630 [2024-07-25 19:15:51.160952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.160973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.630 [2024-07-25 19:15:51.160988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.161008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.161027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.161048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.161064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.161100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.161126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.161150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.630 [2024-07-25 19:15:51.161166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.161188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.630 [2024-07-25 19:15:51.161204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.161226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.630 [2024-07-25 19:15:51.161242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.161264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.630 [2024-07-25 19:15:51.161280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.161302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.161318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.161340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.630 [2024-07-25 19:15:51.161356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.161378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.161409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.162802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.162827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.162855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.162873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.162895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.162916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.162939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.630 [2024-07-25 19:15:51.162955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:01.630 [2024-07-25 19:15:51.162977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.162993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.163015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.163031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.163053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.163069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.163091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.163115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.163139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.163155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.163177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.163193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.163214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.163230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.163252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.163268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.164176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.164221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.164260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.164304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.164342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.164380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.164418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.164471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.164509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.164560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.164611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.164650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.164688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.164725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.164763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.164806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.164844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.164881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.164918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.164956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.164978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.164993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.165015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.165031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.165052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.165069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.165090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.165113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.165137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.165154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.165176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.165193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.166133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.166157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.166189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.166208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.166231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.631 [2024-07-25 19:15:51.166247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.166269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.166285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:01.631 [2024-07-25 19:15:51.166306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.631 [2024-07-25 19:15:51.166323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.166344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.166360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.166382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.166398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.166420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.166436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.166458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.166474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.167236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.167281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.167320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.167358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.167400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.167440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.167492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.167530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.167566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.167618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.167654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.167707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.167745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.167783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.167821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.167858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.167900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.167939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.167977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.167999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.168014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.168036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.168052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.168075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.168091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.169043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.169066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.169113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.169132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.169171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.169187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.169209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.169225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.169248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.169264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.169286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.169301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.169323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.169339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.169367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.169383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.169406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.169422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.169444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.169460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.169482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.632 [2024-07-25 19:15:51.169498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.170132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.170156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:01.632 [2024-07-25 19:15:51.170183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.632 [2024-07-25 19:15:51.170201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.633 [2024-07-25 19:15:51.170238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.633 [2024-07-25 19:15:51.170276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.633 [2024-07-25 19:15:51.170313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.633 [2024-07-25 19:15:51.170350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.170401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.170452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.170498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.633 [2024-07-25 19:15:51.170536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.633 [2024-07-25 19:15:51.170574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.170611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.633 [2024-07-25 19:15:51.170649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.170686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.170724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.633 [2024-07-25 19:15:51.170762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.170800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.633 [2024-07-25 19:15:51.170838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.170860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.633 [2024-07-25 19:15:51.170876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.171932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.171956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.171999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.172022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.172046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.172063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.172085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.172110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.172136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.172152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.172174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.172190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.172212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.633 [2024-07-25 19:15:51.172228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.172250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.633 [2024-07-25 19:15:51.172266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.172287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.633 [2024-07-25 19:15:51.172304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.172325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.172341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.172363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.633 [2024-07-25 19:15:51.172379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.172401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.633 [2024-07-25 19:15:51.172417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.172438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.172454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.172476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.172498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:01.633 [2024-07-25 19:15:51.172521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.633 [2024-07-25 19:15:51.172538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.173357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.173403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.173430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.173463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.173485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.173500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.173521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.173536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.173556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.173571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.173592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.173621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.173644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.173659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.173697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.173713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.173735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.173751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.173773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.173789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.173811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.173827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.173854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.173870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.173892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.173908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.173930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.173946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.173967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.173984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.174904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.174927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.174967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.174986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.175026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.175063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.175108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.175149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.175187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.175228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.175271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.175309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.175347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.175384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.175422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.175460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.175498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.175535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.175573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.175595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.175611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.176243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.176285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.176314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.176332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.176354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.176375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.176413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.634 [2024-07-25 19:15:51.176429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.176449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.634 [2024-07-25 19:15:51.176464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:01.634 [2024-07-25 19:15:51.176484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.176499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.176534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.176551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.176573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.176589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.176611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.176626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.176648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.176664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.176685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.176701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.176723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.176739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.176760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.176776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.176797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.176813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.176835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.176855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.177961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.177986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.178031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.178069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.178115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.178155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.178193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.178231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.178268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.178322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.178360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.178414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.178466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.178510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.178546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.178583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.178619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.178655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.178707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.178742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.178777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.178812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.178848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.178883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.178918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.178958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.178979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.178994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.179014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.179029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.179050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.635 [2024-07-25 19:15:51.179065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:01.635 [2024-07-25 19:15:51.179108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.635 [2024-07-25 19:15:51.179127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.181413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.181439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.181467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.181484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.181506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.181522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.181544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.181560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.181582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.181598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.181620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.181636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.181657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.181673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.181694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.181718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.181741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.181757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.181779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.181794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.181816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.181832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.181854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.181869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.181906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.181923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.181944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.181975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.181996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.182011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.182046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.182081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.182140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.182194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.182238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.182277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.182315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.182352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.182390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.182427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.182464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.182502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.182539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.182576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.182614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.182651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.182689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.182732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.182770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.182808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.182830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.182846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.183711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.636 [2024-07-25 19:15:51.183737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.183779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.636 [2024-07-25 19:15:51.183800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:01.636 [2024-07-25 19:15:51.183823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.637 [2024-07-25 19:15:51.183840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:01.637 [2024-07-25 19:15:51.183862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.637 [2024-07-25 19:15:51.183878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:01.637 [2024-07-25 19:15:51.183900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.637 [2024-07-25 19:15:51.183915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:01.637 [2024-07-25 19:15:51.183937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.637 [2024-07-25 19:15:51.183953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:01.637 [2024-07-25 19:15:51.183975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.637 [2024-07-25 19:15:51.183991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:01.637 [2024-07-25 19:15:51.185018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.637 [2024-07-25 19:15:51.185040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:01.637 [2024-07-25 19:15:51.185086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.637 [2024-07-25 19:15:51.185125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:01.637 [2024-07-25 19:15:51.185153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.637 [2024-07-25 19:15:51.185169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.637 [2024-07-25 19:15:51.185191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.637 [2024-07-25 19:15:51.185208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.637 [2024-07-25 19:15:51.185230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.637 [2024-07-25 19:15:51.185246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:01.637 [2024-07-25 19:15:51.185268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.637 [2024-07-25 19:15:51.185284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:01.637 [2024-07-25 19:15:51.185306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.637 [2024-07-25 19:15:51.185321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:01.637 [2024-07-25 19:15:51.185343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.637 [2024-07-25 19:15:51.185360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:01.637 Received shutdown signal, test time was about 32.347531 seconds 00:25:01.637 00:25:01.637 Latency(us) 00:25:01.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.637 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:01.637 Verification LBA range: start 0x0 length 0x4000 00:25:01.637 Nvme0n1 : 32.35 8087.26 31.59 0.00 0.00 15799.50 928.43 4026531.84 00:25:01.637 =================================================================================================================== 00:25:01.637 Total : 8087.26 31.59 0.00 0.00 15799.50 928.43 4026531.84 00:25:01.637 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:01.895 19:15:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:01.895 rmmod nvme_tcp 00:25:01.895 rmmod nvme_fabrics 00:25:01.895 rmmod nvme_keyring 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 982212 ']' 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 982212 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 982212 ']' 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 982212 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 982212 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 982212' 00:25:01.895 killing process with pid 982212 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 982212 00:25:01.895 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 982212 00:25:02.154 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:02.154 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:02.154 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:02.154 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:02.154 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:02.154 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.154 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.154 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:04.686 00:25:04.686 real 0m42.433s 00:25:04.686 user 2m4.462s 00:25:04.686 sys 0m11.649s 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:04.686 19:15:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:04.686 ************************************ 00:25:04.686 END TEST nvmf_host_multipath_status 00:25:04.686 ************************************ 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.686 ************************************ 00:25:04.686 START TEST nvmf_discovery_remove_ifc 00:25:04.686 ************************************ 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:04.686 * Looking for test storage... 
00:25:04.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # 
discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.686 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.687 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:04.687 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:04.687 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:25:04.687 19:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.217 19:15:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.217 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:07.218 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:07.218 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:07.218 Found net devices under 0000:09:00.0: cvl_0_0 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.218 19:15:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:07.218 Found net devices under 0000:09:00.1: cvl_0_1 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.218 19:15:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:07.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:07.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:25:07.218 00:25:07.218 --- 10.0.0.2 ping statistics --- 00:25:07.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.218 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:07.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:25:07.218 00:25:07.218 --- 10.0.0.1 ping statistics --- 00:25:07.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.218 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=989004 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 989004 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 989004 ']' 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:07.218 19:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:07.218 [2024-07-25 19:15:59.512316] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:25:07.219 [2024-07-25 19:15:59.512404] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.219 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.219 [2024-07-25 19:15:59.593197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.476 [2024-07-25 19:15:59.708171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.477 [2024-07-25 19:15:59.708225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.477 [2024-07-25 19:15:59.708241] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.477 [2024-07-25 19:15:59.708254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.477 [2024-07-25 19:15:59.708266] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:07.477 [2024-07-25 19:15:59.708300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.043 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:08.043 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:08.043 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:08.043 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:08.043 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.043 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.043 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:08.043 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.043 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.043 [2024-07-25 19:16:00.499279] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.043 [2024-07-25 19:16:00.507438] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:08.301 null0 00:25:08.301 [2024-07-25 19:16:00.539423] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.301 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.301 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=989150 00:25:08.301 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:08.301 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 989150 /tmp/host.sock 00:25:08.301 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 989150 ']' 00:25:08.301 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:08.301 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:08.301 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:08.301 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:08.301 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:08.301 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.301 [2024-07-25 19:16:00.612207] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:25:08.301 [2024-07-25 19:16:00.612289] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989150 ] 00:25:08.301 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.301 [2024-07-25 19:16:00.686467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.559 [2024-07-25 19:16:00.798572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.559 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:08.559 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:08.559 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:08.559 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:08.559 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.559 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.559 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.559 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:08.559 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.559 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.559 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.559 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:08.559 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.559 19:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:09.493 [2024-07-25 19:16:01.947478] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:09.493 [2024-07-25 19:16:01.947523] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:09.493 [2024-07-25 19:16:01.947550] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:09.751 [2024-07-25 19:16:02.073965] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:10.009 [2024-07-25 19:16:02.302483] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:10.009 [2024-07-25 19:16:02.302552] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:10.009 [2024-07-25 19:16:02.302602] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:10.009 [2024-07-25 19:16:02.302630] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:10.009 [2024-07-25 19:16:02.302673] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:10.009 [2024-07-25 19:16:02.306148] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x21688e0 was disconnected and freed. delete nvme_qpair. 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:10.009 
19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:10.009 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:10.010 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.010 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:10.010 19:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:11.381 19:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:11.381 19:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.381 19:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.381 19:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:11.381 19:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:11.381 19:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:11.381 19:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:11.381 19:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.381 19:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:11.381 19:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:12.313 19:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:12.313 19:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.313 19:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.313 19:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.313 19:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:12.313 19:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:12.313 19:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:12.313 19:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.313 19:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:12.313 19:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:13.246 19:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:13.246 19:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.246 19:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:13.246 19:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:13.246 19:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:13.246 19:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:13.246 19:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:13.246 19:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.246 19:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:13.246 19:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:14.231 19:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:14.231 19:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.231 19:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:14.231 19:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.231 19:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:14.231 19:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:14.231 19:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:14.231 19:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.231 19:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:14.231 19:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:15.163 19:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:15.163 19:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.163 19:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:15.163 19:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.163 19:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.163 19:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:15.163 19:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:15.421 19:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.421 19:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:15.421 19:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:15.421 [2024-07-25 19:16:07.743265] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:15.421 [2024-07-25 19:16:07.743342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.422 [2024-07-25 19:16:07.743372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.422 [2024-07-25 19:16:07.743406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.422 [2024-07-25 19:16:07.743419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.422 [2024-07-25 19:16:07.743432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.422 [2024-07-25 19:16:07.743460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.422 [2024-07-25 19:16:07.743477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.422 [2024-07-25 19:16:07.743491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.422 [2024-07-25 19:16:07.743506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.422 [2024-07-25 19:16:07.743521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.422 [2024-07-25 19:16:07.743536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212f320 is same with the state(5) to be set 00:25:15.422 [2024-07-25 19:16:07.753283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212f320 (9): Bad file descriptor 00:25:15.422 [2024-07-25 19:16:07.763328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:16.355 19:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:16.355 19:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:16.355 19:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:16.355 19:16:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.355 19:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.355 19:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:16.355 19:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:16.355 [2024-07-25 19:16:08.786139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:16.355 [2024-07-25 19:16:08.786219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x212f320 with addr=10.0.0.2, port=4420 00:25:16.355 [2024-07-25 19:16:08.786245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212f320 is same with the state(5) to be set 00:25:16.355 [2024-07-25 19:16:08.786295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212f320 (9): Bad file descriptor 00:25:16.355 [2024-07-25 19:16:08.786753] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:16.355 [2024-07-25 19:16:08.786796] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:16.355 [2024-07-25 19:16:08.786813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:16.355 [2024-07-25 19:16:08.786827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:16.355 [2024-07-25 19:16:08.786857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.355 [2024-07-25 19:16:08.786876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:16.355 19:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.355 19:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:16.355 19:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:17.728 [2024-07-25 19:16:09.789387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:17.728 [2024-07-25 19:16:09.789453] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:17.728 [2024-07-25 19:16:09.789469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:17.728 [2024-07-25 19:16:09.789484] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:17.728 [2024-07-25 19:16:09.789514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.728 [2024-07-25 19:16:09.789562] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:17.728 [2024-07-25 19:16:09.789608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.728 [2024-07-25 19:16:09.789631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.728 [2024-07-25 19:16:09.789652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.728 [2024-07-25 19:16:09.789667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.728 [2024-07-25 19:16:09.789683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.728 [2024-07-25 19:16:09.789698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.728 [2024-07-25 19:16:09.789714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.728 [2024-07-25 19:16:09.789728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.728 [2024-07-25 19:16:09.789744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.728 [2024-07-25 19:16:09.789759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.728 [2024-07-25 19:16:09.789773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:25:17.728 [2024-07-25 19:16:09.789904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212e780 (9): Bad file descriptor 00:25:17.728 [2024-07-25 19:16:09.790922] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:17.728 [2024-07-25 19:16:09.790947] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:17.728 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.728 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.728 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.728 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.728 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.729 19:16:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:17.729 19:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:18.663 19:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.663 19:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.663 19:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.663 19:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.663 19:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.663 19:16:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.663 19:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.663 19:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.663 19:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:18.663 19:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:19.596 [2024-07-25 19:16:11.803797] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:19.596 [2024-07-25 19:16:11.803829] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:19.596 [2024-07-25 19:16:11.803855] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:19.596 [2024-07-25 19:16:11.933367] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:19.596 19:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:19.596 19:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.596 19:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.596 19:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.596 19:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.596 19:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.596 19:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:25:19.596 19:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.596 [2024-07-25 19:16:11.994298] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:19.596 [2024-07-25 19:16:11.994344] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:19.596 [2024-07-25 19:16:11.994395] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:19.596 [2024-07-25 19:16:11.994418] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:19.596 [2024-07-25 19:16:11.994431] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:19.597 [2024-07-25 19:16:12.001698] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2135120 was disconnected and freed. delete nvme_qpair. 00:25:19.597 19:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:19.597 19:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:20.971 19:16:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 989150 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 989150 ']' 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 989150 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 989150 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 989150' 00:25:20.971 killing process with pid 989150 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 989150 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 989150 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:20.971 rmmod nvme_tcp 00:25:20.971 rmmod nvme_fabrics 00:25:20.971 rmmod nvme_keyring 00:25:20.971 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:21.230 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:21.230 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:21.230 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 989004 ']' 00:25:21.230 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 989004 00:25:21.230 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 989004 ']' 00:25:21.230 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 989004 00:25:21.230 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:21.230 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:21.230 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 989004 00:25:21.230 19:16:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:21.230 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:21.230 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 989004' 00:25:21.230 killing process with pid 989004 00:25:21.230 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 989004 00:25:21.230 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 989004 00:25:21.488 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:21.488 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:21.488 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:21.488 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:21.488 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:21.488 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.488 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.488 19:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.386 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:23.386 00:25:23.386 real 0m19.103s 00:25:23.386 user 0m26.963s 00:25:23.386 sys 0m3.508s 00:25:23.386 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:23.386 19:16:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.386 ************************************ 00:25:23.386 END TEST nvmf_discovery_remove_ifc 00:25:23.386 ************************************ 00:25:23.386 19:16:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:23.386 19:16:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:23.386 19:16:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:23.386 19:16:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.386 ************************************ 00:25:23.386 START TEST nvmf_identify_kernel_target 00:25:23.386 ************************************ 00:25:23.386 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:23.645 * Looking for test storage... 
00:25:23.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:23.645 19:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:26.172 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.172 19:16:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:26.172 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.172 19:16:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:26.172 Found net devices under 0000:09:00.0: cvl_0_0 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:26.172 Found net devices under 0000:09:00.1: cvl_0_1 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:26.172 
19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.172 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.173 
19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:26.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:25:26.173 00:25:26.173 --- 10.0.0.2 ping statistics --- 00:25:26.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.173 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:26.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:25:26.173 00:25:26.173 --- 10.0.0.1 ping statistics --- 00:25:26.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.173 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.173 19:16:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@639 -- # local block nvme 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:26.173 19:16:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:27.548 Waiting for block devices as requested 00:25:27.548 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:27.548 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:27.806 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:27.806 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:27.806 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:27.806 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:28.064 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:28.064 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:28.064 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:25:28.322 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:28.322 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:28.322 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:28.581 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:28.581 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:28.581 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:28.581 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:28.840 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:28.840 No valid GPT data, bailing 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:28.840 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:29.098 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:25:29.098 00:25:29.098 Discovery Log Number of Records 2, Generation counter 2 00:25:29.098 =====Discovery Log Entry 0====== 00:25:29.098 trtype: tcp 00:25:29.098 adrfam: ipv4 00:25:29.098 subtype: current discovery subsystem 00:25:29.098 treq: not specified, sq flow control disable supported 00:25:29.098 portid: 1 00:25:29.098 trsvcid: 4420 00:25:29.098 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:29.098 traddr: 10.0.0.1 00:25:29.098 eflags: none 00:25:29.098 sectype: none 00:25:29.098 =====Discovery Log Entry 1====== 00:25:29.098 trtype: tcp 00:25:29.098 adrfam: ipv4 00:25:29.098 subtype: nvme subsystem 00:25:29.098 treq: not specified, sq flow control disable supported 00:25:29.098 portid: 1 
00:25:29.098 trsvcid: 4420 00:25:29.098 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:29.098 traddr: 10.0.0.1 00:25:29.098 eflags: none 00:25:29.098 sectype: none 00:25:29.098 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:29.098 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:29.098 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.098 ===================================================== 00:25:29.098 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:29.098 ===================================================== 00:25:29.098 Controller Capabilities/Features 00:25:29.098 ================================ 00:25:29.098 Vendor ID: 0000 00:25:29.098 Subsystem Vendor ID: 0000 00:25:29.098 Serial Number: 0c7fa48f0fc98595c48d 00:25:29.098 Model Number: Linux 00:25:29.098 Firmware Version: 6.7.0-68 00:25:29.098 Recommended Arb Burst: 0 00:25:29.098 IEEE OUI Identifier: 00 00 00 00:25:29.098 Multi-path I/O 00:25:29.098 May have multiple subsystem ports: No 00:25:29.098 May have multiple controllers: No 00:25:29.098 Associated with SR-IOV VF: No 00:25:29.098 Max Data Transfer Size: Unlimited 00:25:29.098 Max Number of Namespaces: 0 00:25:29.098 Max Number of I/O Queues: 1024 00:25:29.098 NVMe Specification Version (VS): 1.3 00:25:29.098 NVMe Specification Version (Identify): 1.3 00:25:29.098 Maximum Queue Entries: 1024 00:25:29.098 Contiguous Queues Required: No 00:25:29.098 Arbitration Mechanisms Supported 00:25:29.098 Weighted Round Robin: Not Supported 00:25:29.098 Vendor Specific: Not Supported 00:25:29.098 Reset Timeout: 7500 ms 00:25:29.098 Doorbell Stride: 4 bytes 00:25:29.098 NVM Subsystem Reset: Not Supported 00:25:29.098 Command Sets Supported 00:25:29.098 NVM Command Set: Supported 00:25:29.098 Boot Partition: Not Supported 
00:25:29.098 Memory Page Size Minimum: 4096 bytes 00:25:29.098 Memory Page Size Maximum: 4096 bytes 00:25:29.098 Persistent Memory Region: Not Supported 00:25:29.098 Optional Asynchronous Events Supported 00:25:29.098 Namespace Attribute Notices: Not Supported 00:25:29.098 Firmware Activation Notices: Not Supported 00:25:29.098 ANA Change Notices: Not Supported 00:25:29.098 PLE Aggregate Log Change Notices: Not Supported 00:25:29.098 LBA Status Info Alert Notices: Not Supported 00:25:29.098 EGE Aggregate Log Change Notices: Not Supported 00:25:29.098 Normal NVM Subsystem Shutdown event: Not Supported 00:25:29.098 Zone Descriptor Change Notices: Not Supported 00:25:29.098 Discovery Log Change Notices: Supported 00:25:29.098 Controller Attributes 00:25:29.098 128-bit Host Identifier: Not Supported 00:25:29.098 Non-Operational Permissive Mode: Not Supported 00:25:29.098 NVM Sets: Not Supported 00:25:29.098 Read Recovery Levels: Not Supported 00:25:29.098 Endurance Groups: Not Supported 00:25:29.098 Predictable Latency Mode: Not Supported 00:25:29.098 Traffic Based Keep ALive: Not Supported 00:25:29.098 Namespace Granularity: Not Supported 00:25:29.098 SQ Associations: Not Supported 00:25:29.098 UUID List: Not Supported 00:25:29.098 Multi-Domain Subsystem: Not Supported 00:25:29.098 Fixed Capacity Management: Not Supported 00:25:29.098 Variable Capacity Management: Not Supported 00:25:29.098 Delete Endurance Group: Not Supported 00:25:29.098 Delete NVM Set: Not Supported 00:25:29.098 Extended LBA Formats Supported: Not Supported 00:25:29.098 Flexible Data Placement Supported: Not Supported 00:25:29.098 00:25:29.098 Controller Memory Buffer Support 00:25:29.098 ================================ 00:25:29.098 Supported: No 00:25:29.098 00:25:29.098 Persistent Memory Region Support 00:25:29.098 ================================ 00:25:29.098 Supported: No 00:25:29.098 00:25:29.098 Admin Command Set Attributes 00:25:29.098 ============================ 00:25:29.098 Security 
Send/Receive: Not Supported 00:25:29.098 Format NVM: Not Supported 00:25:29.098 Firmware Activate/Download: Not Supported 00:25:29.098 Namespace Management: Not Supported 00:25:29.098 Device Self-Test: Not Supported 00:25:29.098 Directives: Not Supported 00:25:29.098 NVMe-MI: Not Supported 00:25:29.098 Virtualization Management: Not Supported 00:25:29.098 Doorbell Buffer Config: Not Supported 00:25:29.098 Get LBA Status Capability: Not Supported 00:25:29.098 Command & Feature Lockdown Capability: Not Supported 00:25:29.098 Abort Command Limit: 1 00:25:29.098 Async Event Request Limit: 1 00:25:29.098 Number of Firmware Slots: N/A 00:25:29.098 Firmware Slot 1 Read-Only: N/A 00:25:29.098 Firmware Activation Without Reset: N/A 00:25:29.098 Multiple Update Detection Support: N/A 00:25:29.098 Firmware Update Granularity: No Information Provided 00:25:29.098 Per-Namespace SMART Log: No 00:25:29.098 Asymmetric Namespace Access Log Page: Not Supported 00:25:29.098 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:29.098 Command Effects Log Page: Not Supported 00:25:29.098 Get Log Page Extended Data: Supported 00:25:29.098 Telemetry Log Pages: Not Supported 00:25:29.098 Persistent Event Log Pages: Not Supported 00:25:29.098 Supported Log Pages Log Page: May Support 00:25:29.098 Commands Supported & Effects Log Page: Not Supported 00:25:29.098 Feature Identifiers & Effects Log Page:May Support 00:25:29.098 NVMe-MI Commands & Effects Log Page: May Support 00:25:29.098 Data Area 4 for Telemetry Log: Not Supported 00:25:29.098 Error Log Page Entries Supported: 1 00:25:29.098 Keep Alive: Not Supported 00:25:29.098 00:25:29.098 NVM Command Set Attributes 00:25:29.098 ========================== 00:25:29.098 Submission Queue Entry Size 00:25:29.098 Max: 1 00:25:29.098 Min: 1 00:25:29.098 Completion Queue Entry Size 00:25:29.098 Max: 1 00:25:29.098 Min: 1 00:25:29.098 Number of Namespaces: 0 00:25:29.098 Compare Command: Not Supported 00:25:29.098 Write Uncorrectable Command: 
Not Supported 00:25:29.098 Dataset Management Command: Not Supported 00:25:29.098 Write Zeroes Command: Not Supported 00:25:29.098 Set Features Save Field: Not Supported 00:25:29.098 Reservations: Not Supported 00:25:29.098 Timestamp: Not Supported 00:25:29.098 Copy: Not Supported 00:25:29.098 Volatile Write Cache: Not Present 00:25:29.098 Atomic Write Unit (Normal): 1 00:25:29.098 Atomic Write Unit (PFail): 1 00:25:29.098 Atomic Compare & Write Unit: 1 00:25:29.098 Fused Compare & Write: Not Supported 00:25:29.098 Scatter-Gather List 00:25:29.098 SGL Command Set: Supported 00:25:29.098 SGL Keyed: Not Supported 00:25:29.098 SGL Bit Bucket Descriptor: Not Supported 00:25:29.098 SGL Metadata Pointer: Not Supported 00:25:29.098 Oversized SGL: Not Supported 00:25:29.098 SGL Metadata Address: Not Supported 00:25:29.098 SGL Offset: Supported 00:25:29.098 Transport SGL Data Block: Not Supported 00:25:29.098 Replay Protected Memory Block: Not Supported 00:25:29.098 00:25:29.098 Firmware Slot Information 00:25:29.098 ========================= 00:25:29.098 Active slot: 0 00:25:29.098 00:25:29.098 00:25:29.098 Error Log 00:25:29.098 ========= 00:25:29.098 00:25:29.098 Active Namespaces 00:25:29.098 ================= 00:25:29.098 Discovery Log Page 00:25:29.098 ================== 00:25:29.098 Generation Counter: 2 00:25:29.098 Number of Records: 2 00:25:29.098 Record Format: 0 00:25:29.098 00:25:29.098 Discovery Log Entry 0 00:25:29.098 ---------------------- 00:25:29.098 Transport Type: 3 (TCP) 00:25:29.098 Address Family: 1 (IPv4) 00:25:29.098 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:29.098 Entry Flags: 00:25:29.098 Duplicate Returned Information: 0 00:25:29.098 Explicit Persistent Connection Support for Discovery: 0 00:25:29.098 Transport Requirements: 00:25:29.098 Secure Channel: Not Specified 00:25:29.098 Port ID: 1 (0x0001) 00:25:29.098 Controller ID: 65535 (0xffff) 00:25:29.098 Admin Max SQ Size: 32 00:25:29.098 Transport Service Identifier: 4420 
00:25:29.098 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:29.098 Transport Address: 10.0.0.1 00:25:29.098 Discovery Log Entry 1 00:25:29.098 ---------------------- 00:25:29.098 Transport Type: 3 (TCP) 00:25:29.098 Address Family: 1 (IPv4) 00:25:29.098 Subsystem Type: 2 (NVM Subsystem) 00:25:29.098 Entry Flags: 00:25:29.098 Duplicate Returned Information: 0 00:25:29.098 Explicit Persistent Connection Support for Discovery: 0 00:25:29.098 Transport Requirements: 00:25:29.098 Secure Channel: Not Specified 00:25:29.098 Port ID: 1 (0x0001) 00:25:29.098 Controller ID: 65535 (0xffff) 00:25:29.098 Admin Max SQ Size: 32 00:25:29.098 Transport Service Identifier: 4420 00:25:29.098 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:29.098 Transport Address: 10.0.0.1 00:25:29.098 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:29.098 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.356 get_feature(0x01) failed 00:25:29.356 get_feature(0x02) failed 00:25:29.356 get_feature(0x04) failed 00:25:29.356 ===================================================== 00:25:29.356 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:29.356 ===================================================== 00:25:29.357 Controller Capabilities/Features 00:25:29.357 ================================ 00:25:29.357 Vendor ID: 0000 00:25:29.357 Subsystem Vendor ID: 0000 00:25:29.357 Serial Number: 5345b105b32c5072347d 00:25:29.357 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:29.357 Firmware Version: 6.7.0-68 00:25:29.357 Recommended Arb Burst: 6 00:25:29.357 IEEE OUI Identifier: 00 00 00 00:25:29.357 Multi-path I/O 00:25:29.357 May have multiple subsystem ports: Yes 00:25:29.357 May have multiple 
controllers: Yes 00:25:29.357 Associated with SR-IOV VF: No 00:25:29.357 Max Data Transfer Size: Unlimited 00:25:29.357 Max Number of Namespaces: 1024 00:25:29.357 Max Number of I/O Queues: 128 00:25:29.357 NVMe Specification Version (VS): 1.3 00:25:29.357 NVMe Specification Version (Identify): 1.3 00:25:29.357 Maximum Queue Entries: 1024 00:25:29.357 Contiguous Queues Required: No 00:25:29.357 Arbitration Mechanisms Supported 00:25:29.357 Weighted Round Robin: Not Supported 00:25:29.357 Vendor Specific: Not Supported 00:25:29.357 Reset Timeout: 7500 ms 00:25:29.357 Doorbell Stride: 4 bytes 00:25:29.357 NVM Subsystem Reset: Not Supported 00:25:29.357 Command Sets Supported 00:25:29.357 NVM Command Set: Supported 00:25:29.357 Boot Partition: Not Supported 00:25:29.357 Memory Page Size Minimum: 4096 bytes 00:25:29.357 Memory Page Size Maximum: 4096 bytes 00:25:29.357 Persistent Memory Region: Not Supported 00:25:29.357 Optional Asynchronous Events Supported 00:25:29.357 Namespace Attribute Notices: Supported 00:25:29.357 Firmware Activation Notices: Not Supported 00:25:29.357 ANA Change Notices: Supported 00:25:29.357 PLE Aggregate Log Change Notices: Not Supported 00:25:29.357 LBA Status Info Alert Notices: Not Supported 00:25:29.357 EGE Aggregate Log Change Notices: Not Supported 00:25:29.357 Normal NVM Subsystem Shutdown event: Not Supported 00:25:29.357 Zone Descriptor Change Notices: Not Supported 00:25:29.357 Discovery Log Change Notices: Not Supported 00:25:29.357 Controller Attributes 00:25:29.357 128-bit Host Identifier: Supported 00:25:29.357 Non-Operational Permissive Mode: Not Supported 00:25:29.357 NVM Sets: Not Supported 00:25:29.357 Read Recovery Levels: Not Supported 00:25:29.357 Endurance Groups: Not Supported 00:25:29.357 Predictable Latency Mode: Not Supported 00:25:29.357 Traffic Based Keep ALive: Supported 00:25:29.357 Namespace Granularity: Not Supported 00:25:29.357 SQ Associations: Not Supported 00:25:29.357 UUID List: Not Supported 
00:25:29.357 Multi-Domain Subsystem: Not Supported 00:25:29.357 Fixed Capacity Management: Not Supported 00:25:29.357 Variable Capacity Management: Not Supported 00:25:29.357 Delete Endurance Group: Not Supported 00:25:29.357 Delete NVM Set: Not Supported 00:25:29.357 Extended LBA Formats Supported: Not Supported 00:25:29.357 Flexible Data Placement Supported: Not Supported 00:25:29.357 00:25:29.357 Controller Memory Buffer Support 00:25:29.357 ================================ 00:25:29.357 Supported: No 00:25:29.357 00:25:29.357 Persistent Memory Region Support 00:25:29.357 ================================ 00:25:29.357 Supported: No 00:25:29.357 00:25:29.357 Admin Command Set Attributes 00:25:29.357 ============================ 00:25:29.357 Security Send/Receive: Not Supported 00:25:29.357 Format NVM: Not Supported 00:25:29.357 Firmware Activate/Download: Not Supported 00:25:29.357 Namespace Management: Not Supported 00:25:29.357 Device Self-Test: Not Supported 00:25:29.357 Directives: Not Supported 00:25:29.357 NVMe-MI: Not Supported 00:25:29.357 Virtualization Management: Not Supported 00:25:29.357 Doorbell Buffer Config: Not Supported 00:25:29.357 Get LBA Status Capability: Not Supported 00:25:29.357 Command & Feature Lockdown Capability: Not Supported 00:25:29.357 Abort Command Limit: 4 00:25:29.357 Async Event Request Limit: 4 00:25:29.357 Number of Firmware Slots: N/A 00:25:29.357 Firmware Slot 1 Read-Only: N/A 00:25:29.357 Firmware Activation Without Reset: N/A 00:25:29.357 Multiple Update Detection Support: N/A 00:25:29.357 Firmware Update Granularity: No Information Provided 00:25:29.357 Per-Namespace SMART Log: Yes 00:25:29.357 Asymmetric Namespace Access Log Page: Supported 00:25:29.357 ANA Transition Time : 10 sec 00:25:29.357 00:25:29.357 Asymmetric Namespace Access Capabilities 00:25:29.357 ANA Optimized State : Supported 00:25:29.357 ANA Non-Optimized State : Supported 00:25:29.357 ANA Inaccessible State : Supported 00:25:29.357 ANA Persistent Loss 
State : Supported 00:25:29.357 ANA Change State : Supported 00:25:29.357 ANAGRPID is not changed : No 00:25:29.357 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:29.357 00:25:29.357 ANA Group Identifier Maximum : 128 00:25:29.357 Number of ANA Group Identifiers : 128 00:25:29.357 Max Number of Allowed Namespaces : 1024 00:25:29.357 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:29.357 Command Effects Log Page: Supported 00:25:29.357 Get Log Page Extended Data: Supported 00:25:29.357 Telemetry Log Pages: Not Supported 00:25:29.357 Persistent Event Log Pages: Not Supported 00:25:29.357 Supported Log Pages Log Page: May Support 00:25:29.357 Commands Supported & Effects Log Page: Not Supported 00:25:29.357 Feature Identifiers & Effects Log Page:May Support 00:25:29.357 NVMe-MI Commands & Effects Log Page: May Support 00:25:29.357 Data Area 4 for Telemetry Log: Not Supported 00:25:29.357 Error Log Page Entries Supported: 128 00:25:29.357 Keep Alive: Supported 00:25:29.357 Keep Alive Granularity: 1000 ms 00:25:29.357 00:25:29.357 NVM Command Set Attributes 00:25:29.357 ========================== 00:25:29.357 Submission Queue Entry Size 00:25:29.357 Max: 64 00:25:29.357 Min: 64 00:25:29.357 Completion Queue Entry Size 00:25:29.357 Max: 16 00:25:29.357 Min: 16 00:25:29.357 Number of Namespaces: 1024 00:25:29.357 Compare Command: Not Supported 00:25:29.357 Write Uncorrectable Command: Not Supported 00:25:29.357 Dataset Management Command: Supported 00:25:29.357 Write Zeroes Command: Supported 00:25:29.357 Set Features Save Field: Not Supported 00:25:29.357 Reservations: Not Supported 00:25:29.357 Timestamp: Not Supported 00:25:29.357 Copy: Not Supported 00:25:29.357 Volatile Write Cache: Present 00:25:29.357 Atomic Write Unit (Normal): 1 00:25:29.357 Atomic Write Unit (PFail): 1 00:25:29.357 Atomic Compare & Write Unit: 1 00:25:29.357 Fused Compare & Write: Not Supported 00:25:29.357 Scatter-Gather List 00:25:29.357 SGL Command Set: Supported 00:25:29.357 SGL 
Keyed: Not Supported 00:25:29.357 SGL Bit Bucket Descriptor: Not Supported 00:25:29.357 SGL Metadata Pointer: Not Supported 00:25:29.357 Oversized SGL: Not Supported 00:25:29.357 SGL Metadata Address: Not Supported 00:25:29.357 SGL Offset: Supported 00:25:29.357 Transport SGL Data Block: Not Supported 00:25:29.357 Replay Protected Memory Block: Not Supported 00:25:29.357 00:25:29.357 Firmware Slot Information 00:25:29.357 ========================= 00:25:29.357 Active slot: 0 00:25:29.357 00:25:29.357 Asymmetric Namespace Access 00:25:29.357 =========================== 00:25:29.357 Change Count : 0 00:25:29.357 Number of ANA Group Descriptors : 1 00:25:29.357 ANA Group Descriptor : 0 00:25:29.357 ANA Group ID : 1 00:25:29.357 Number of NSID Values : 1 00:25:29.357 Change Count : 0 00:25:29.357 ANA State : 1 00:25:29.357 Namespace Identifier : 1 00:25:29.357 00:25:29.357 Commands Supported and Effects 00:25:29.357 ============================== 00:25:29.357 Admin Commands 00:25:29.357 -------------- 00:25:29.357 Get Log Page (02h): Supported 00:25:29.357 Identify (06h): Supported 00:25:29.357 Abort (08h): Supported 00:25:29.357 Set Features (09h): Supported 00:25:29.357 Get Features (0Ah): Supported 00:25:29.357 Asynchronous Event Request (0Ch): Supported 00:25:29.357 Keep Alive (18h): Supported 00:25:29.357 I/O Commands 00:25:29.357 ------------ 00:25:29.357 Flush (00h): Supported 00:25:29.357 Write (01h): Supported LBA-Change 00:25:29.357 Read (02h): Supported 00:25:29.357 Write Zeroes (08h): Supported LBA-Change 00:25:29.357 Dataset Management (09h): Supported 00:25:29.357 00:25:29.357 Error Log 00:25:29.357 ========= 00:25:29.357 Entry: 0 00:25:29.357 Error Count: 0x3 00:25:29.357 Submission Queue Id: 0x0 00:25:29.357 Command Id: 0x5 00:25:29.357 Phase Bit: 0 00:25:29.357 Status Code: 0x2 00:25:29.357 Status Code Type: 0x0 00:25:29.358 Do Not Retry: 1 00:25:29.358 Error Location: 0x28 00:25:29.358 LBA: 0x0 00:25:29.358 Namespace: 0x0 00:25:29.358 Vendor Log Page: 
0x0 00:25:29.358 ----------- 00:25:29.358 Entry: 1 00:25:29.358 Error Count: 0x2 00:25:29.358 Submission Queue Id: 0x0 00:25:29.358 Command Id: 0x5 00:25:29.358 Phase Bit: 0 00:25:29.358 Status Code: 0x2 00:25:29.358 Status Code Type: 0x0 00:25:29.358 Do Not Retry: 1 00:25:29.358 Error Location: 0x28 00:25:29.358 LBA: 0x0 00:25:29.358 Namespace: 0x0 00:25:29.358 Vendor Log Page: 0x0 00:25:29.358 ----------- 00:25:29.358 Entry: 2 00:25:29.358 Error Count: 0x1 00:25:29.358 Submission Queue Id: 0x0 00:25:29.358 Command Id: 0x4 00:25:29.358 Phase Bit: 0 00:25:29.358 Status Code: 0x2 00:25:29.358 Status Code Type: 0x0 00:25:29.358 Do Not Retry: 1 00:25:29.358 Error Location: 0x28 00:25:29.358 LBA: 0x0 00:25:29.358 Namespace: 0x0 00:25:29.358 Vendor Log Page: 0x0 00:25:29.358 00:25:29.358 Number of Queues 00:25:29.358 ================ 00:25:29.358 Number of I/O Submission Queues: 128 00:25:29.358 Number of I/O Completion Queues: 128 00:25:29.358 00:25:29.358 ZNS Specific Controller Data 00:25:29.358 ============================ 00:25:29.358 Zone Append Size Limit: 0 00:25:29.358 00:25:29.358 00:25:29.358 Active Namespaces 00:25:29.358 ================= 00:25:29.358 get_feature(0x05) failed 00:25:29.358 Namespace ID:1 00:25:29.358 Command Set Identifier: NVM (00h) 00:25:29.358 Deallocate: Supported 00:25:29.358 Deallocated/Unwritten Error: Not Supported 00:25:29.358 Deallocated Read Value: Unknown 00:25:29.358 Deallocate in Write Zeroes: Not Supported 00:25:29.358 Deallocated Guard Field: 0xFFFF 00:25:29.358 Flush: Supported 00:25:29.358 Reservation: Not Supported 00:25:29.358 Namespace Sharing Capabilities: Multiple Controllers 00:25:29.358 Size (in LBAs): 1953525168 (931GiB) 00:25:29.358 Capacity (in LBAs): 1953525168 (931GiB) 00:25:29.358 Utilization (in LBAs): 1953525168 (931GiB) 00:25:29.358 UUID: 2197eb38-f158-4425-b7c4-6c8900bd24cc 00:25:29.358 Thin Provisioning: Not Supported 00:25:29.358 Per-NS Atomic Units: Yes 00:25:29.358 Atomic Boundary Size (Normal): 0 
00:25:29.358 Atomic Boundary Size (PFail): 0 00:25:29.358 Atomic Boundary Offset: 0 00:25:29.358 NGUID/EUI64 Never Reused: No 00:25:29.358 ANA group ID: 1 00:25:29.358 Namespace Write Protected: No 00:25:29.358 Number of LBA Formats: 1 00:25:29.358 Current LBA Format: LBA Format #00 00:25:29.358 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:29.358 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:29.358 rmmod nvme_tcp 00:25:29.358 rmmod nvme_fabrics 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:29.358 
19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.358 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.309 19:16:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:31.309 19:16:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:31.309 19:16:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:31.309 19:16:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:31.309 19:16:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:31.309 19:16:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:31.309 19:16:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:31.309 19:16:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:31.309 19:16:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:31.309 19:16:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:31.309 19:16:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:32.680 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:32.680 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:32.680 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:32.680 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:32.680 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:32.680 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:32.680 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:32.680 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:32.680 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:32.680 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:32.680 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:32.680 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:32.680 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:32.680 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:32.680 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:32.680 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:33.612 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:25:33.871 00:25:33.871 real 0m10.360s 00:25:33.871 user 0m2.363s 00:25:33.871 sys 0m3.942s 00:25:33.871 19:16:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:33.871 19:16:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:33.871 ************************************ 00:25:33.871 END TEST nvmf_identify_kernel_target 00:25:33.871 ************************************ 00:25:33.871 19:16:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:33.871 19:16:26 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:33.871 19:16:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:33.871 19:16:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.871 ************************************ 00:25:33.871 START TEST nvmf_auth_host 00:25:33.871 ************************************ 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:33.872 * Looking for test storage... 00:25:33.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.872 19:16:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # 
hostnqn=nqn.2024-02.io.spdk:host0 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:33.872 19:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:36.436 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:36.436 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:36.436 Found net devices under 0000:09:00.0: cvl_0_0 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:36.436 Found net devices under 0000:09:00.1: cvl_0_1 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:36.436 19:16:28 
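The device-discovery trace above (`nvmf/common.sh@289`–`@401`) buckets NICs into the `e810`, `x722`, and `mlx` arrays by PCI vendor/device ID before walking `/sys/bus/pci/devices/$pci/net/` for their net devices. A minimal sketch of that classification step, using only the vendor/device IDs visible in the trace (the function name and `unknown` fallback are illustrative, not part of the script):

```python
# Vendor IDs as set in the trace: intel=0x8086, mellanox=0x15b3.
INTEL, MELLANOX = "0x8086", "0x15b3"

# Device IDs taken from the pci_bus_cache lookups in the trace above.
NIC_FAMILIES = {
    # Intel E810 (ice driver); this run found two 0x159b ports.
    (INTEL, "0x1592"): "e810",
    (INTEL, "0x159b"): "e810",
    # Intel X722.
    (INTEL, "0x37d2"): "x722",
    # Mellanox ConnectX family IDs listed in nvmf/common.sh@306-318.
    (MELLANOX, "0xa2dc"): "mlx",
    (MELLANOX, "0x1021"): "mlx",
    (MELLANOX, "0xa2d6"): "mlx",
    (MELLANOX, "0x101d"): "mlx",
    (MELLANOX, "0x1017"): "mlx",
    (MELLANOX, "0x1019"): "mlx",
    (MELLANOX, "0x1015"): "mlx",
    (MELLANOX, "0x1013"): "mlx",
}


def classify(vendor: str, device: str) -> str:
    """Return the NIC family for a PCI (vendor, device) pair, or 'unknown'."""
    return NIC_FAMILIES.get((vendor, device), "unknown")


# The two ports found in this run: 0000:09:00.0/1 (0x8086 - 0x159b).
print(classify("0x8086", "0x159b"))  # → e810
```

Because both discovered ports classify as `e810` and the transport is `tcp` (not `rdma`), the trace then takes the `[[ e810 == e810 ]]` branch and restricts `pci_devs` to the E810 ports.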
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.436 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:36.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:25:36.437 00:25:36.437 --- 10.0.0.2 ping statistics --- 00:25:36.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.437 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:25:36.437 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:36.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:36.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:25:36.695 00:25:36.695 --- 10.0.0.1 ping statistics --- 00:25:36.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.695 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=997269 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 997269 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 997269 ']' 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:36.695 19:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.627 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:37.627 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:25:37.627 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:37.627 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:37.627 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.627 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.627 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:37.627 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:37.627 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:37.627 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.627 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:37.627 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:37.627 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:37.627 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:37.627 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=65c3736beda9f33b7385aa1fc0a571fc 00:25:37.628 19:16:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.yh3 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 65c3736beda9f33b7385aa1fc0a571fc 0 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 65c3736beda9f33b7385aa1fc0a571fc 0 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=65c3736beda9f33b7385aa1fc0a571fc 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.yh3 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.yh3 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.yh3 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:37.628 19:16:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=379eb5a91421cc71a77233202bb9fc903157e1eb93ebca82834ea36cf279c83a 00:25:37.628 19:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.W0I 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 379eb5a91421cc71a77233202bb9fc903157e1eb93ebca82834ea36cf279c83a 3 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 379eb5a91421cc71a77233202bb9fc903157e1eb93ebca82834ea36cf279c83a 3 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=379eb5a91421cc71a77233202bb9fc903157e1eb93ebca82834ea36cf279c83a 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.W0I 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.W0I 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.W0I 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1ee8a81e8f56c16259a19b17319886d29e5940a4ddfcc81b 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.7ya 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1ee8a81e8f56c16259a19b17319886d29e5940a4ddfcc81b 0 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1ee8a81e8f56c16259a19b17319886d29e5940a4ddfcc81b 0 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1ee8a81e8f56c16259a19b17319886d29e5940a4ddfcc81b 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:37.628 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.7ya 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.7ya 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.7ya 
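Each `gen_dhchap_key <digest> <len>` call in the trace reads `len/2` random bytes with `xxd -p -c0 /dev/urandom`, then pipes the hex key through an inline `python -` step (`nvmf/common.sh@705`) to produce the `DHHC-1:...` secret written to `/tmp/spdk.key-*`. A sketch of what that formatting step most likely does, assuming the DH-HMAC-CHAP secret representation used by nvme-cli (base64 of the key followed by its little-endian CRC32); the function name and signature here are illustrative:

```python
import base64
import os
import zlib


def gen_dhchap_key(hash_id: int, hex_len: int) -> str:
    """Sketch of gen_dhchap_key/format_dhchap_key from the trace above.

    hash_id: the digest index from the trace's digests map
             (null=0, sha256=1, sha384=2, sha512=3).
    hex_len: length of the hex key string (the trace's `len`);
             the raw key is hex_len/2 bytes, as in `xxd -p -l $((len/2))`.
    """
    key = os.urandom(hex_len // 2)  # stands in for xxd -p /dev/urandom
    crc = zlib.crc32(key).to_bytes(4, "little")
    # Assumed secret layout: DHHC-1:<hh>:<base64(key || crc32)>:
    return "DHHC-1:{:02x}:{}:".format(hash_id, base64.b64encode(key + crc).decode())


# Mirrors the trace's `gen_dhchap_key null 32` (keys[0]).
print(gen_dhchap_key(0, 32))
```

The trace then `chmod 0600`s each file, matching the permissions a DHCHAP secret file needs before `nvme connect --dhchap-secret` will accept it.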
00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7614e7a80da5e19418becd50065be1ba4c4bb389314f6160 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.qeX 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7614e7a80da5e19418becd50065be1ba4c4bb389314f6160 2 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7614e7a80da5e19418becd50065be1ba4c4bb389314f6160 2 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7614e7a80da5e19418becd50065be1ba4c4bb389314f6160 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:37.886 19:16:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.qeX 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.qeX 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.qeX 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=efcc1f4a49566aae650a7ecae5d08d0e 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.PC2 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key efcc1f4a49566aae650a7ecae5d08d0e 1 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 efcc1f4a49566aae650a7ecae5d08d0e 1 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=efcc1f4a49566aae650a7ecae5d08d0e 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.PC2 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.PC2 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.PC2 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c5cd09e753a90408522f97dbce0ef68b 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9Rv 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c5cd09e753a90408522f97dbce0ef68b 1 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c5cd09e753a90408522f97dbce0ef68b 1 00:25:37.886 19:16:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c5cd09e753a90408522f97dbce0ef68b 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9Rv 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9Rv 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.9Rv 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e78baad56c954e291075717289c633f4b77052d345072c33 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IcC 00:25:37.886 19:16:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e78baad56c954e291075717289c633f4b77052d345072c33 2 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e78baad56c954e291075717289c633f4b77052d345072c33 2 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e78baad56c954e291075717289c633f4b77052d345072c33 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IcC 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IcC 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.IcC 00:25:37.886 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=65bd3c3290c31afee90a580596682b09 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Dgz 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 65bd3c3290c31afee90a580596682b09 0 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 65bd3c3290c31afee90a580596682b09 0 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=65bd3c3290c31afee90a580596682b09 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Dgz 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Dgz 00:25:37.887 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Dgz 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@726 -- # len=64 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d5719e172740ed9d90350051add7bc4c1bc5d4025b40f8f36ebf71af10efa61f 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.CZZ 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d5719e172740ed9d90350051add7bc4c1bc5d4025b40f8f36ebf71af10efa61f 3 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d5719e172740ed9d90350051add7bc4c1bc5d4025b40f8f36ebf71af10efa61f 3 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d5719e172740ed9d90350051add7bc4c1bc5d4025b40f8f36ebf71af10efa61f 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.CZZ 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.CZZ 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.CZZ 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 997269 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@831 -- # '[' -z 997269 ']' 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:38.145 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yh3 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.W0I ]] 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.W0I 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.7ya 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.qeX ]] 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qeX 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.PC2 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.403 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.9Rv ]] 00:25:38.403 19:16:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Rv 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.IcC 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Dgz ]] 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Dgz 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.CZZ 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:38.404 19:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:39.776 Waiting for block devices as requested 00:25:39.776 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:40.033 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:40.033 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:40.033 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:40.290 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:40.290 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:40.290 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:40.290 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:40.547 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:25:40.547 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:40.547 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:40.805 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:40.805 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:40.805 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:40.805 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:41.064 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:41.064 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:25:41.322 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:41.322 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:41.322 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:41.322 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:41.323 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:41.323 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:41.323 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:41.323 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:41.323 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:41.323 No valid GPT data, bailing 00:25:41.323 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:41.580 19:16:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:25:41.580 00:25:41.580 Discovery Log Number of Records 2, Generation counter 2 00:25:41.580 =====Discovery Log Entry 0====== 00:25:41.580 trtype: tcp 00:25:41.580 adrfam: ipv4 00:25:41.580 subtype: current discovery subsystem 00:25:41.580 treq: not specified, sq flow control disable supported 00:25:41.580 portid: 1 00:25:41.580 trsvcid: 4420 00:25:41.580 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:41.580 traddr: 10.0.0.1 00:25:41.580 eflags: none 00:25:41.580 sectype: none 00:25:41.580 =====Discovery Log Entry 1====== 00:25:41.580 trtype: tcp 00:25:41.580 adrfam: ipv4 00:25:41.580 subtype: nvme subsystem 00:25:41.580 treq: not specified, sq flow control 
disable supported 00:25:41.580 portid: 1 00:25:41.580 trsvcid: 4420 00:25:41.580 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:41.580 traddr: 10.0.0.1 00:25:41.580 eflags: none 00:25:41.580 sectype: none 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.580 19:16:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.580 19:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.837 nvme0n1 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.837 19:16:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: ]] 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.837 19:16:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.837 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.095 nvme0n1 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.095 19:16:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.095 
19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.095 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.095 nvme0n1 00:25:42.096 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.096 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.096 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.096 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.096 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.096 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: ]] 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 
00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.354 19:16:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.354 nvme0n1 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.354 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.613 19:16:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: ]] 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 
00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.613 19:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.613 nvme0n1 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.613 19:16:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:42.613 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.614 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.873 nvme0n1 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.873 
19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: ]] 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.873 19:16:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.873 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.132 nvme0n1 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.132 19:16:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:43.132 19:16:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.132 19:16:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.132 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.390 nvme0n1 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.390 19:16:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.390 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: ]] 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:43.391 19:16:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.391 19:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.649 nvme0n1 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe3072 3 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: ]] 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.649 19:16:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.649 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.907 nvme0n1 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.907 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.908 
19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.908 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.166 nvme0n1 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.166 19:16:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: ]] 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.166 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.167 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.167 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.167 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.167 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.167 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.167 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.424 nvme0n1 00:25:44.424 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.424 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.424 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.424 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.424 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.424 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.682 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:44.682 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.682 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.682 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.682 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.682 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.682 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:44.682 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.682 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:44.682 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:25:44.683 19:16:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.683 19:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.941 nvme0n1 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.941 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: ]] 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:44.942 
19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.942 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.200 nvme0n1 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.200 19:16:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: ]] 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.200 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:45.458 nvme0n1 00:25:45.458 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.458 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.458 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.458 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.458 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.716 
19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.716 19:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.974 nvme0n1 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: ]] 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.974 19:16:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.974 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.975 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.975 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.539 nvme0n1 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.539 19:16:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.539 19:16:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.539 19:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.105 nvme0n1 00:25:47.105 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.105 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.105 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.105 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.105 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.105 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.105 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.105 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.105 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.105 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.364 19:16:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: ]] 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:47.364 19:16:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.364 19:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.930 nvme0n1 00:25:47.930 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.930 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.930 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.930 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.930 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.930 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.930 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.930 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.930 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.930 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.930 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.930 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.931 19:16:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: ]] 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:47.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.497 nvme0n1
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=:
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=:
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:48.497 19:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.062 nvme0n1
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3:
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=:
00:25:49.062 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3:
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: ]]
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=:
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.063 19:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.996 nvme0n1
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==:
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==:
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==:
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]]
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==:
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.996 19:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:50.929 nvme0n1
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P:
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e:
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P:
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: ]]
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e:
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:50.929 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:51.189 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:51.189 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:51.189 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:51.189 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:51.189 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:51.189 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:51.189 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:51.189 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:51.189 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:51.189 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:51.189 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:51.189 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:51.189 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:51.189 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:51.189 19:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:52.150 nvme0n1
00:25:52.150 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==:
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg:
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==:
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: ]]
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg:
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:52.151 19:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.083 nvme0n1
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=:
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=:
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:53.083 19:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.016 nvme0n1
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3:
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=:
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3:
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: ]]
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=:
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742
-- # local -A ip_candidates 00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.016 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.275 nvme0n1 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:25:54.275 19:16:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.275 nvme0n1 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.275 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: ]] 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:54.533 
19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.533 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.534 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.534 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.534 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.534 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.534 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:54.534 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.534 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.534 nvme0n1 00:25:54.534 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.534 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.534 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.534 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.534 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.534 19:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.534 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.792 19:16:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: ]] 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.792 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:54.793 nvme0n1 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.793 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.051 
19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.051 nvme0n1 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: ]] 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.051 19:16:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.051 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.310 nvme0n1 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.310 19:16:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.310 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.311 19:16:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.311 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.569 nvme0n1 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.569 19:16:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: ]] 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:55.569 19:16:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.569 19:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.569 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.569 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.569 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.569 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.569 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.569 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.569 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.569 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.569 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.569 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.569 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.569 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.569 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:55.569 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.569 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.828 nvme0n1 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.828 19:16:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: ]] 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.828 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:56.086 nvme0n1 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.086 
19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.086 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.087 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:56.087 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.087 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.345 nvme0n1 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: ]] 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.345 19:16:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.345 19:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.911 nvme0n1 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.911 19:16:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.911 19:16:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.911 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.169 nvme0n1 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.169 19:16:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: ]] 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.169 19:16:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.169 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.427 nvme0n1 00:25:57.427 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.427 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.427 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.427 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.427 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.427 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.427 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.427 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.427 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.427 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.427 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.427 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.428 19:16:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: ]] 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.686 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.686 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.686 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.686 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.686 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.686 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.686 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.686 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.686 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.686 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.686 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.686 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.686 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.686 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.686 19:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:57.944 nvme0n1 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:57.944 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.945 
19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.945 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.203 nvme0n1 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: ]] 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.203 19:16:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.203 19:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.769 nvme0n1 00:25:58.769 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.769 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.769 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.769 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.769 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.769 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.769 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.769 19:16:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.769 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.769 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.027 19:16:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.027 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.592 nvme0n1 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.592 19:16:51 
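On the target side, the `nvmet_auth_set_key` calls traced above (host/auth.sh@42-51) echo the digest as `hmac(shaNNN)`, the dhgroup, and the DHHC-1 key material into the kernel nvmet target. A rough, hedged equivalent is sketched below; the configfs attribute names (`dhchap_hash`, `dhchap_dhgroup`, `dhchap_key`, `dhchap_ctrl_key` under the host entry) are assumptions based on the standard Linux nvmet layout, not taken from this log, and the destination directory is parameterized so the writes can be dry-run against a scratch directory:

```shell
#!/usr/bin/env bash
# Hypothetical target-side key setup mirroring nvmet_auth_set_key.
# On a real target, host_dir would be
# /sys/kernel/config/nvmet/hosts/<hostnqn> (assumed layout).
nvmet_auth_set_key() {
    local host_dir=$1 digest=$2 dhgroup=$3 key=$4 ckey=${5:-}
    echo "hmac(${digest})" > "${host_dir}/dhchap_hash"
    echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"
    echo "${key}"          > "${host_dir}/dhchap_key"
    # A controller key is only written for bidirectional auth.
    if [[ -n ${ckey} ]]; then
        echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
    fi
}

# Dry run into a temp directory instead of configfs, using the
# sha384/ffdhe4096/keyid=2 parameters from the log:
dir=$(mktemp -d)
nvmet_auth_set_key "${dir}" sha384 ffdhe4096 \
    "DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P:" \
    "DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e:"
```

The host then authenticates against these values when `bdev_nvme_attach_controller` presents the matching `--dhchap-key`/`--dhchap-ctrlr-key`.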
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: ]] 00:25:59.592 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.593 19:16:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.593 19:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.159 nvme0n1 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.159 19:16:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: ]] 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.159 19:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:00.725 nvme0n1 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.725 
19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.725 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.291 nvme0n1 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: ]] 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.291 19:16:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.291 19:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.664 nvme0n1 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.664 19:16:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.664 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.665 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.665 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.665 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.665 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.665 19:16:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.665 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.665 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.665 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.665 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.665 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.665 19:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.598 nvme0n1 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.598 19:16:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: ]] 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.598 19:16:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.598 19:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.530 nvme0n1 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.530 19:16:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:04.530 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: ]] 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.531 19:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:05.463 nvme0n1 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:05.463 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.464 
19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.464 19:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.396 nvme0n1 00:26:06.396 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.396 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.396 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.396 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.396 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.396 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.655 19:16:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: ]] 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.655 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.655 nvme0n1 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:26:06.655 19:16:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.655 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.914 nvme0n1 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: ]] 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.914 
19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.914 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.172 nvme0n1 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.172 19:16:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: ]] 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.172 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.173 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.173 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.173 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.173 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.173 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.173 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:07.173 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.173 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:07.430 nvme0n1 00:26:07.430 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.430 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.430 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.430 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.430 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.430 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.430 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.430 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.430 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.430 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.430 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.430 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.430 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.431 
19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.431 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.689 nvme0n1 00:26:07.689 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.689 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.689 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.689 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.689 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.689 19:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: ]] 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.689 19:17:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.689 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.948 nvme0n1 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.948 19:17:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.948 19:17:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.948 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.207 nvme0n1 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.207 19:17:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: ]] 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.207 19:17:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:08.207 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.208 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.466 nvme0n1 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.466 19:17:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: ]] 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.466 19:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:08.724 nvme0n1 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.724 
19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.724 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.982 nvme0n1 00:26:08.982 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.982 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.982 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.982 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.982 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.982 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.982 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.982 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.982 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.982 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.982 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.982 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: ]] 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.983 19:17:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.983 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.242 nvme0n1 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.242 19:17:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.242 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.536 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.536 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.536 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.536 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.536 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.536 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.536 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.536 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.536 19:17:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.536 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.536 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.536 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.536 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.536 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.536 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.794 nvme0n1 00:26:09.794 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.794 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.794 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.794 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.794 19:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.794 19:17:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: ]] 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:09.794 19:17:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:09.794 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.795 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.053 nvme0n1 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.053 19:17:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: ]] 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.053 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:10.311 nvme0n1 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.311 
19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.311 19:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.877 nvme0n1 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.877 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: ]] 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.878 19:17:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.878 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.443 nvme0n1 00:26:11.443 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.443 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.443 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.443 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.443 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.443 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.443 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.443 19:17:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.443 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.443 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.443 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.443 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.443 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.444 19:17:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.444 19:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.010 nvme0n1 00:26:12.010 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.010 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.011 19:17:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: ]] 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:12.011 19:17:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.011 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.577 nvme0n1 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.577 19:17:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: ]] 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.577 19:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:13.142 nvme0n1 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.142 
19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.142 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.708 nvme0n1 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjVjMzczNmJlZGE5ZjMzYjczODVhYTFmYzBhNTcxZmMFxeo3: 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: ]] 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc5ZWI1YTkxNDIxY2M3MWE3NzIzMzIwMmJiOWZjOTAzMTU3ZTFlYjkzZWJjYTgyODM0ZWEzNmNmMjc5YzgzYf5Ke+g=: 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.708 19:17:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.708 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.639 nvme0n1 00:26:14.639 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.639 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.639 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.639 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.639 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.896 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.896 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.896 19:17:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.896 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.896 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.896 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.896 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.897 19:17:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.897 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.828 nvme0n1 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.828 19:17:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWZjYzFmNGE0OTU2NmFhZTY1MGE3ZWNhZTVkMDhkMGWRbe3P: 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: ]] 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVjZDA5ZTc1M2E5MDQwODUyMmY5N2RiY2UwZWY2OGLFxJ0e: 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.828 19:17:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.828 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.829 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.829 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.829 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:15.829 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.829 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.761 nvme0n1 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.761 19:17:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmFhZDU2Yzk1NGUyOTEwNzU3MTcyODljNjMzZjRiNzcwNTJkMzQ1MDcyYzMzEO4s1Q==: 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: ]] 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjViZDNjMzI5MGMzMWFmZWU5MGE1ODA1OTY2ODJiMDmUMoOg: 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.761 19:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:17.693 nvme0n1 00:26:17.693 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.693 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.693 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.693 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.693 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDU3MTllMTcyNzQwZWQ5ZDkwMzUwMDUxYWRkN2JjNGMxYmM1ZDQwMjViNDBmOGYzNmViZjcxYWYxMGVmYTYxZp+5PEI=: 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.950 
19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.950 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.951 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.951 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.951 19:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.884 nvme0n1 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVlOGE4MWU4ZjU2YzE2MjU5YTE5YjE3MzE5ODg2ZDI5ZTU5NDBhNGRkZmNjODFiDK1k7Q==: 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: ]] 00:26:18.884 
19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzYxNGU3YTgwZGE1ZTE5NDE4YmVjZDUwMDY1YmUxYmE0YzRiYjM4OTMxNGY2MTYwsMdREQ==: 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.884 request: 00:26:18.884 { 00:26:18.884 "name": "nvme0", 00:26:18.884 "trtype": "tcp", 00:26:18.884 "traddr": "10.0.0.1", 00:26:18.884 "adrfam": "ipv4", 00:26:18.884 "trsvcid": "4420", 00:26:18.884 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:18.884 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:18.884 "prchk_reftag": false, 00:26:18.884 "prchk_guard": false, 00:26:18.884 "hdgst": false, 00:26:18.884 "ddgst": false, 00:26:18.884 "method": "bdev_nvme_attach_controller", 00:26:18.884 "req_id": 1 00:26:18.884 } 00:26:18.884 Got JSON-RPC error response 00:26:18.884 response: 00:26:18.884 { 00:26:18.884 "code": -5, 00:26:18.884 "message": "Input/output error" 00:26:18.884 } 00:26:18.884 19:17:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.884 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.143 19:17:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.143 request: 00:26:19.143 { 00:26:19.143 "name": "nvme0", 00:26:19.143 "trtype": "tcp", 00:26:19.143 "traddr": "10.0.0.1", 00:26:19.143 "adrfam": "ipv4", 00:26:19.143 
"trsvcid": "4420", 00:26:19.143 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:19.143 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:19.143 "prchk_reftag": false, 00:26:19.143 "prchk_guard": false, 00:26:19.143 "hdgst": false, 00:26:19.143 "ddgst": false, 00:26:19.143 "dhchap_key": "key2", 00:26:19.143 "method": "bdev_nvme_attach_controller", 00:26:19.143 "req_id": 1 00:26:19.143 } 00:26:19.143 Got JSON-RPC error response 00:26:19.143 response: 00:26:19.143 { 00:26:19.143 "code": -5, 00:26:19.143 "message": "Input/output error" 00:26:19.143 } 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.143 
19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.143 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:19.144 19:17:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.144 request: 00:26:19.144 { 00:26:19.144 "name": "nvme0", 00:26:19.144 "trtype": "tcp", 00:26:19.144 "traddr": "10.0.0.1", 00:26:19.144 "adrfam": "ipv4", 00:26:19.144 "trsvcid": "4420", 00:26:19.144 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:19.144 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:19.144 "prchk_reftag": false, 00:26:19.144 "prchk_guard": false, 00:26:19.144 "hdgst": false, 00:26:19.144 "ddgst": false, 00:26:19.144 "dhchap_key": "key1", 00:26:19.144 "dhchap_ctrlr_key": "ckey2", 00:26:19.144 "method": "bdev_nvme_attach_controller", 00:26:19.144 "req_id": 1 00:26:19.144 } 00:26:19.144 Got JSON-RPC error response 00:26:19.144 response: 00:26:19.144 { 00:26:19.144 "code": -5, 00:26:19.144 "message": "Input/output error" 00:26:19.144 } 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:19.144 19:17:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:19.144 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:19.402 rmmod nvme_tcp 00:26:19.402 rmmod nvme_fabrics 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 997269 ']' 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 997269 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 997269 ']' 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 997269 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 997269 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 997269' 00:26:19.402 killing process with pid 997269 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 997269 00:26:19.402 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 997269 00:26:19.661 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:19.661 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:19.661 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:19.661 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:19.661 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:19.661 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.661 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.661 19:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.565 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:21.565 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:21.565 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:21.565 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:21.565 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:21.565 
19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:21.565 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:21.565 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:21.565 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:21.565 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:21.565 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:21.565 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:21.565 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:23.465 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:23.465 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:23.465 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:23.465 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:23.465 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:23.465 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:23.465 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:23.465 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:23.465 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:23.465 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:23.465 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:23.465 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:23.465 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:23.465 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:23.465 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:23.465 0000:80:04.0 (8086 0e20): ioatdma -> 
vfio-pci 00:26:24.403 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:26:24.403 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.yh3 /tmp/spdk.key-null.7ya /tmp/spdk.key-sha256.PC2 /tmp/spdk.key-sha384.IcC /tmp/spdk.key-sha512.CZZ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:24.403 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:25.779 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:25.779 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:25.779 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:25.779 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:25.779 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:25.779 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:25.779 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:25.779 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:25.779 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:25.779 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:25.779 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:25.779 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:25.779 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:25.779 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:25.779 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:25.779 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:25.779 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:25.779 00:26:25.779 real 0m51.888s 00:26:25.779 user 0m49.078s 00:26:25.779 sys 0m6.558s 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:25.779 ************************************ 00:26:25.779 END TEST nvmf_auth_host 00:26:25.779 ************************************ 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.779 ************************************ 00:26:25.779 START TEST nvmf_digest 00:26:25.779 ************************************ 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:25.779 * Looking for test storage... 
00:26:25.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.779 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.038 19:17:18 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:26:26.038 19:17:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:28.569 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:28.569 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.569 19:17:20 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:28.569 Found net devices under 0000:09:00.0: cvl_0_0 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:28.569 Found net devices under 0000:09:00.1: cvl_0_1 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.569 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:28.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:26:28.570 00:26:28.570 --- 10.0.0.2 ping statistics --- 00:26:28.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.570 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:26:28.570 00:26:28.570 --- 10.0.0.1 ping statistics --- 00:26:28.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.570 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:28.570 ************************************ 00:26:28.570 START TEST nvmf_digest_clean 00:26:28.570 ************************************ 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1007556 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1007556 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1007556 ']' 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:28.570 19:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:28.570 [2024-07-25 19:17:20.942148] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:26:28.570 [2024-07-25 19:17:20.942237] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.570 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.570 [2024-07-25 19:17:21.022507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.829 [2024-07-25 19:17:21.133762] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.829 [2024-07-25 19:17:21.133809] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.829 [2024-07-25 19:17:21.133823] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.829 [2024-07-25 19:17:21.133833] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.829 [2024-07-25 19:17:21.133843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:28.829 [2024-07-25 19:17:21.133867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.829 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:28.829 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:28.829 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:28.829 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:28.829 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:28.829 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.830 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:28.830 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:28.830 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:28.830 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.830 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:28.830 null0 00:26:28.830 [2024-07-25 19:17:21.290783] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.122 [2024-07-25 19:17:21.314977] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1007697 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1007697 /var/tmp/bperf.sock 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1007697 ']' 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:29.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:29.122 [2024-07-25 19:17:21.363398] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:29.122 [2024-07-25 19:17:21.363518] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007697 ] 00:26:29.122 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.122 [2024-07-25 19:17:21.433771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.122 [2024-07-25 19:17:21.541316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:29.122 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:29.688 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.688 19:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.946 nvme0n1 00:26:30.203 19:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:30.203 19:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:30.203 Running I/O for 2 seconds... 00:26:32.099 00:26:32.099 Latency(us) 00:26:32.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.099 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:32.099 nvme0n1 : 2.00 19260.95 75.24 0.00 0.00 6636.45 2876.30 17573.36 00:26:32.099 =================================================================================================================== 00:26:32.099 Total : 19260.95 75.24 0.00 0.00 6636.45 2876.30 17573.36 00:26:32.099 0 00:26:32.099 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:32.099 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:32.099 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:32.099 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:32.099 | select(.opcode=="crc32c") 00:26:32.099 | "\(.module_name) \(.executed)"' 00:26:32.099 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:32.664 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:32.664 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@94 -- # exp_module=software 00:26:32.664 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:32.664 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:32.664 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1007697 00:26:32.664 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1007697 ']' 00:26:32.664 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1007697 00:26:32.665 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:32.665 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:32.665 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1007697 00:26:32.665 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:32.665 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:32.665 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1007697' 00:26:32.665 killing process with pid 1007697 00:26:32.665 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1007697 00:26:32.665 Received shutdown signal, test time was about 2.000000 seconds 00:26:32.665 00:26:32.665 Latency(us) 00:26:32.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.665 =================================================================================================================== 00:26:32.665 Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:26:32.665 19:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1007697 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1008112 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1008112 /var/tmp/bperf.sock 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1008112 ']' 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:32.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:32.923 [2024-07-25 19:17:25.184522] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:32.923 [2024-07-25 19:17:25.184616] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008112 ] 00:26:32.923 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:32.923 Zero copy mechanism will not be used. 00:26:32.923 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.923 [2024-07-25 19:17:25.251474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.923 [2024-07-25 19:17:25.358956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:32.923 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:33.489 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:33.489 19:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:33.747 nvme0n1 00:26:33.747 19:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:33.747 19:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:34.004 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:34.004 Zero copy mechanism will not be used. 00:26:34.004 Running I/O for 2 seconds... 00:26:35.903 00:26:35.903 Latency(us) 00:26:35.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.903 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:35.903 nvme0n1 : 2.00 2605.27 325.66 0.00 0.00 6136.56 5412.79 15825.73 00:26:35.903 =================================================================================================================== 00:26:35.903 Total : 2605.27 325.66 0.00 0.00 6136.56 5412.79 15825.73 00:26:35.903 0 00:26:35.903 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:35.903 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:35.903 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:35.903 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bperf.sock accel_get_stats 00:26:35.903 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:35.903 | select(.opcode=="crc32c") 00:26:35.903 | "\(.module_name) \(.executed)"' 00:26:36.160 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:36.160 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:36.160 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:36.160 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:36.160 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1008112 00:26:36.160 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1008112 ']' 00:26:36.160 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1008112 00:26:36.161 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:36.161 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:36.161 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1008112 00:26:36.161 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:36.161 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:36.161 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1008112' 00:26:36.161 killing process with pid 1008112 00:26:36.161 19:17:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1008112 00:26:36.161 Received shutdown signal, test time was about 2.000000 seconds 00:26:36.161 00:26:36.161 Latency(us) 00:26:36.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.161 =================================================================================================================== 00:26:36.161 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:36.161 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1008112 00:26:36.419 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:36.419 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:36.419 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:36.419 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:36.419 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:36.419 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:36.419 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:36.419 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1008522 00:26:36.419 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:36.419 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1008522 /var/tmp/bperf.sock 00:26:36.419 19:17:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1008522 ']' 00:26:36.677 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:36.677 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:36.677 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:36.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:36.677 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:36.677 19:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:36.677 [2024-07-25 19:17:28.933441] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:26:36.677 [2024-07-25 19:17:28.933530] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008522 ] 00:26:36.677 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.677 [2024-07-25 19:17:29.001219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.677 [2024-07-25 19:17:29.108920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.677 19:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:36.677 19:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:36.677 19:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:36.677 19:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:36.677 19:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:37.244 19:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:37.244 19:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:37.501 nvme0n1 00:26:37.501 19:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:37.501 19:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:37.501 Running I/O for 2 seconds... 00:26:40.030 00:26:40.030 Latency(us) 00:26:40.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.030 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:40.030 nvme0n1 : 2.00 21225.24 82.91 0.00 0.00 6020.77 3568.07 14854.83 00:26:40.030 =================================================================================================================== 00:26:40.030 Total : 21225.24 82.91 0.00 0.00 6020.77 3568.07 14854.83 00:26:40.030 0 00:26:40.030 19:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:40.030 19:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:40.030 19:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:40.030 19:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:40.030 19:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:40.030 | select(.opcode=="crc32c") 00:26:40.030 | "\(.module_name) \(.executed)"' 00:26:40.030 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:40.030 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:40.030 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:40.030 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:40.030 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@98 -- # killprocess 1008522 00:26:40.030 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1008522 ']' 00:26:40.030 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1008522 00:26:40.030 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:40.030 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:40.030 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1008522 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1008522' 00:26:40.031 killing process with pid 1008522 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1008522 00:26:40.031 Received shutdown signal, test time was about 2.000000 seconds 00:26:40.031 00:26:40.031 Latency(us) 00:26:40.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.031 =================================================================================================================== 00:26:40.031 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1008522 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 
-- # local rw bs qd scan_dsa 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1008922 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1008922 /var/tmp/bperf.sock 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1008922 ']' 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:40.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:40.031 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.289 [2024-07-25 19:17:32.512467] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:40.289 [2024-07-25 19:17:32.512564] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008922 ] 00:26:40.289 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:40.289 Zero copy mechanism will not be used. 00:26:40.289 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.289 [2024-07-25 19:17:32.587145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.289 [2024-07-25 19:17:32.703604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.289 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:40.289 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:40.289 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:40.289 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:40.289 19:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:40.855 19:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.855 19:17:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:41.118 nvme0n1 00:26:41.118 19:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:41.118 19:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:41.118 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:41.118 Zero copy mechanism will not be used. 00:26:41.118 Running I/O for 2 seconds... 00:26:43.648 00:26:43.648 Latency(us) 00:26:43.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.648 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:43.648 nvme0n1 : 2.01 1804.94 225.62 0.00 0.00 8843.68 4708.88 12233.39 00:26:43.648 =================================================================================================================== 00:26:43.648 Total : 1804.94 225.62 0.00 0.00 8843.68 4708.88 12233.39 00:26:43.648 0 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:26:43.648 | select(.opcode=="crc32c") 00:26:43.648 | "\(.module_name) \(.executed)"' 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1008922 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1008922 ']' 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1008922 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1008922 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1008922' 00:26:43.648 killing process with pid 1008922 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1008922 00:26:43.648 Received shutdown signal, test time was about 2.000000 seconds 00:26:43.648 
00:26:43.648 Latency(us) 00:26:43.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.648 =================================================================================================================== 00:26:43.648 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:43.648 19:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1008922 00:26:43.648 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1007556 00:26:43.648 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1007556 ']' 00:26:43.648 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1007556 00:26:43.648 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:43.648 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:43.648 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1007556 00:26:43.648 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:43.648 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:43.648 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1007556' 00:26:43.648 killing process with pid 1007556 00:26:43.648 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1007556 00:26:43.648 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1007556 00:26:44.215 00:26:44.215 real 0m15.520s 00:26:44.215 user 0m31.170s 00:26:44.215 sys 0m3.859s 00:26:44.215 19:17:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:44.215 ************************************ 00:26:44.215 END TEST nvmf_digest_clean 00:26:44.215 ************************************ 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:44.215 ************************************ 00:26:44.215 START TEST nvmf_digest_error 00:26:44.215 ************************************ 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1009479 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # 
waitforlisten 1009479 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1009479 ']' 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:44.215 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:44.215 [2024-07-25 19:17:36.516601] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:44.215 [2024-07-25 19:17:36.516677] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.215 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.215 [2024-07-25 19:17:36.593057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.474 [2024-07-25 19:17:36.706107] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.474 [2024-07-25 19:17:36.706168] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:44.474 [2024-07-25 19:17:36.706182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.474 [2024-07-25 19:17:36.706193] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.474 [2024-07-25 19:17:36.706203] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:44.474 [2024-07-25 19:17:36.706228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:44.474 [2024-07-25 19:17:36.758734] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.474 19:17:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:44.474 null0 00:26:44.474 [2024-07-25 19:17:36.877738] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.474 [2024-07-25 19:17:36.901944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1009508 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1009508 /var/tmp/bperf.sock 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1009508 ']' 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:44.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:44.474 19:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:44.732 [2024-07-25 19:17:36.952373] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:44.732 [2024-07-25 19:17:36.952468] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009508 ] 00:26:44.732 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.732 [2024-07-25 19:17:37.018606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.732 [2024-07-25 19:17:37.127569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.990 19:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:44.990 19:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:44.990 19:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:44.990 19:17:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:45.248 19:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:45.248 19:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.248 19:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.248 19:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.248 19:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:45.248 19:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:45.506 nvme0n1 00:26:45.506 19:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:45.506 19:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.506 19:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.506 19:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.506 19:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:45.506 19:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:45.764 Running I/O for 2 seconds... 00:26:45.764 [2024-07-25 19:17:38.109460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:45.764 [2024-07-25 19:17:38.109513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.764 [2024-07-25 19:17:38.109535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.764 [2024-07-25 19:17:38.126119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:45.764 [2024-07-25 19:17:38.126170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.764 [2024-07-25 19:17:38.126188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.764 [2024-07-25 19:17:38.146911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:45.764 [2024-07-25 19:17:38.146950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.764 [2024-07-25 19:17:38.146970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.764 [2024-07-25 19:17:38.167308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:45.764 [2024-07-25 19:17:38.167341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:14568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.764 [2024-07-25 19:17:38.167358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.764 [2024-07-25 19:17:38.188507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:45.764 [2024-07-25 19:17:38.188543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.764 [2024-07-25 19:17:38.188563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.764 [2024-07-25 19:17:38.208549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:45.764 [2024-07-25 19:17:38.208586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.764 [2024-07-25 19:17:38.208614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.764 [2024-07-25 19:17:38.228288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:45.764 [2024-07-25 19:17:38.228334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.764 [2024-07-25 19:17:38.228350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.022 [2024-07-25 19:17:38.250067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.022 [2024-07-25 19:17:38.250113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.022 [2024-07-25 19:17:38.250153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.023 [2024-07-25 19:17:38.270426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.023 [2024-07-25 19:17:38.270479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.023 [2024-07-25 19:17:38.270498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.023 [2024-07-25 19:17:38.290237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.023 [2024-07-25 19:17:38.290271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.023 [2024-07-25 19:17:38.290288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.023 [2024-07-25 19:17:38.309834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.023 [2024-07-25 19:17:38.309869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.023 [2024-07-25 19:17:38.309887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.023 [2024-07-25 19:17:38.329646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 
00:26:46.023 [2024-07-25 19:17:38.329681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.023 [2024-07-25 19:17:38.329699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.023 [2024-07-25 19:17:38.349469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.023 [2024-07-25 19:17:38.349505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.023 [2024-07-25 19:17:38.349524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.023 [2024-07-25 19:17:38.369217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.023 [2024-07-25 19:17:38.369253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.023 [2024-07-25 19:17:38.369271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.023 [2024-07-25 19:17:38.388484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.023 [2024-07-25 19:17:38.388520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.023 [2024-07-25 19:17:38.388539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.023 [2024-07-25 19:17:38.408091] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.023 [2024-07-25 19:17:38.408148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.023 [2024-07-25 19:17:38.408166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.023 [2024-07-25 19:17:38.427718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.023 [2024-07-25 19:17:38.427753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.023 [2024-07-25 19:17:38.427772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.023 [2024-07-25 19:17:38.447450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.023 [2024-07-25 19:17:38.447485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.023 [2024-07-25 19:17:38.447504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.023 [2024-07-25 19:17:38.467274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.023 [2024-07-25 19:17:38.467306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.023 [2024-07-25 19:17:38.467323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:46.023 [2024-07-25 19:17:38.486725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.023 [2024-07-25 19:17:38.486760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.023 [2024-07-25 19:17:38.486779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.281 [2024-07-25 19:17:38.506584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.281 [2024-07-25 19:17:38.506622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.281 [2024-07-25 19:17:38.506642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.281 [2024-07-25 19:17:38.526014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.281 [2024-07-25 19:17:38.526049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.281 [2024-07-25 19:17:38.526068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.281 [2024-07-25 19:17:38.545021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.281 [2024-07-25 19:17:38.545060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.281 [2024-07-25 19:17:38.545086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.281 [2024-07-25 19:17:38.565837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.281 [2024-07-25 19:17:38.565873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.281 [2024-07-25 19:17:38.565893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.282 [2024-07-25 19:17:38.584539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.282 [2024-07-25 19:17:38.584572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.282 [2024-07-25 19:17:38.584589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.282 [2024-07-25 19:17:38.602418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.282 [2024-07-25 19:17:38.602450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.282 [2024-07-25 19:17:38.602467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.282 [2024-07-25 19:17:38.620395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.282 [2024-07-25 19:17:38.620436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.282 [2024-07-25 
19:17:38.620452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.282 [2024-07-25 19:17:38.638704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.282 [2024-07-25 19:17:38.638735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.282 [2024-07-25 19:17:38.638753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.282 [2024-07-25 19:17:38.656956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.282 [2024-07-25 19:17:38.656988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.282 [2024-07-25 19:17:38.657019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.282 [2024-07-25 19:17:38.678573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.282 [2024-07-25 19:17:38.678602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.282 [2024-07-25 19:17:38.678618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.282 [2024-07-25 19:17:38.696395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.282 [2024-07-25 19:17:38.696440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24382 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.282 [2024-07-25 19:17:38.696456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.282 [2024-07-25 19:17:38.714734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.282 [2024-07-25 19:17:38.714775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.282 [2024-07-25 19:17:38.714792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.282 [2024-07-25 19:17:38.732615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.282 [2024-07-25 19:17:38.732646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.282 [2024-07-25 19:17:38.732678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.282 [2024-07-25 19:17:38.750636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.282 [2024-07-25 19:17:38.750670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.282 [2024-07-25 19:17:38.750688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.540 [2024-07-25 19:17:38.768568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.540 [2024-07-25 19:17:38.768602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.540 [2024-07-25 19:17:38.768635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.540 [2024-07-25 19:17:38.786644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.540 [2024-07-25 19:17:38.786678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.540 [2024-07-25 19:17:38.786710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.540 [2024-07-25 19:17:38.805026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.540 [2024-07-25 19:17:38.805059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.540 [2024-07-25 19:17:38.805076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.540 [2024-07-25 19:17:38.822966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.540 [2024-07-25 19:17:38.823013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.540 [2024-07-25 19:17:38.823030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.540 [2024-07-25 19:17:38.840616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ec8cb0) 00:26:46.540 [2024-07-25 19:17:38.840648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.540 [2024-07-25 19:17:38.840665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.540 [2024-07-25 19:17:38.858548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.540 [2024-07-25 19:17:38.858580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.540 [2024-07-25 19:17:38.858597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.540 [2024-07-25 19:17:38.876316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.540 [2024-07-25 19:17:38.876348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.540 [2024-07-25 19:17:38.876365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.540 [2024-07-25 19:17:38.894230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.540 [2024-07-25 19:17:38.894263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.540 [2024-07-25 19:17:38.894280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.540 [2024-07-25 19:17:38.912282] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.540 [2024-07-25 19:17:38.912315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.540 [2024-07-25 19:17:38.912332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.540 [2024-07-25 19:17:38.930217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.540 [2024-07-25 19:17:38.930250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.540 [2024-07-25 19:17:38.930267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.540 [2024-07-25 19:17:38.948132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.540 [2024-07-25 19:17:38.948164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.540 [2024-07-25 19:17:38.948181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.540 [2024-07-25 19:17:38.965984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.540 [2024-07-25 19:17:38.966016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.540 [2024-07-25 19:17:38.966032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:46.540 [2024-07-25 19:17:38.984031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.540 [2024-07-25 19:17:38.984063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.540 [2024-07-25 19:17:38.984095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.540 [2024-07-25 19:17:39.001980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.541 [2024-07-25 19:17:39.002026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.541 [2024-07-25 19:17:39.002043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.799 [2024-07-25 19:17:39.020193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.799 [2024-07-25 19:17:39.020242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.799 [2024-07-25 19:17:39.020266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.799 [2024-07-25 19:17:39.038065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.799 [2024-07-25 19:17:39.038122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.799 [2024-07-25 19:17:39.038142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.799 [2024-07-25 19:17:39.056146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.799 [2024-07-25 19:17:39.056192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.799 [2024-07-25 19:17:39.056210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.799 [2024-07-25 19:17:39.074051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.799 [2024-07-25 19:17:39.074096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.799 [2024-07-25 19:17:39.074122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.799 [2024-07-25 19:17:39.091703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.799 [2024-07-25 19:17:39.091735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.799 [2024-07-25 19:17:39.091752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.799 [2024-07-25 19:17:39.109554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.799 [2024-07-25 19:17:39.109587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.799 [2024-07-25 
19:17:39.109604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.799 [2024-07-25 19:17:39.127612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.799 [2024-07-25 19:17:39.127644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.799 [2024-07-25 19:17:39.127661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.799 [2024-07-25 19:17:39.145523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.799 [2024-07-25 19:17:39.145556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.799 [2024-07-25 19:17:39.145573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.799 [2024-07-25 19:17:39.163605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.799 [2024-07-25 19:17:39.163636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.799 [2024-07-25 19:17:39.163653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.799 [2024-07-25 19:17:39.181632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.799 [2024-07-25 19:17:39.181663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19639 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.799 [2024-07-25 19:17:39.181680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.799 [2024-07-25 19:17:39.199711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.799 [2024-07-25 19:17:39.199742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.799 [2024-07-25 19:17:39.199759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.799 [2024-07-25 19:17:39.217880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.799 [2024-07-25 19:17:39.217912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.799 [2024-07-25 19:17:39.217930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.799 [2024-07-25 19:17:39.235849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.799 [2024-07-25 19:17:39.235879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.799 [2024-07-25 19:17:39.235896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.799 [2024-07-25 19:17:39.256988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:46.799 [2024-07-25 19:17:39.257018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.799 [2024-07-25 19:17:39.257035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.058 [2024-07-25 19:17:39.274929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.058 [2024-07-25 19:17:39.274961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.058 [2024-07-25 19:17:39.274977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.058 [2024-07-25 19:17:39.292865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.058 [2024-07-25 19:17:39.292914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.058 [2024-07-25 19:17:39.292931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.058 [2024-07-25 19:17:39.310894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.058 [2024-07-25 19:17:39.310924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.058 [2024-07-25 19:17:39.310940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.058 [2024-07-25 19:17:39.328655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ec8cb0) 00:26:47.058 [2024-07-25 19:17:39.328685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.058 [2024-07-25 19:17:39.328707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.058 [2024-07-25 19:17:39.346671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.058 [2024-07-25 19:17:39.346703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.058 [2024-07-25 19:17:39.346720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.058 [2024-07-25 19:17:39.364684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.058 [2024-07-25 19:17:39.364714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.058 [2024-07-25 19:17:39.364729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.058 [2024-07-25 19:17:39.382950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.058 [2024-07-25 19:17:39.382991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.058 [2024-07-25 19:17:39.383006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.058 [2024-07-25 19:17:39.400907] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.058 [2024-07-25 19:17:39.400952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.058 [2024-07-25 19:17:39.400968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.058 [2024-07-25 19:17:39.419403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.058 [2024-07-25 19:17:39.419435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.058 [2024-07-25 19:17:39.419467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.058 [2024-07-25 19:17:39.437159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.058 [2024-07-25 19:17:39.437191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.058 [2024-07-25 19:17:39.437209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.058 [2024-07-25 19:17:39.455419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.058 [2024-07-25 19:17:39.455450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.058 [2024-07-25 19:17:39.455467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:47.058 [2024-07-25 19:17:39.473539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.058 [2024-07-25 19:17:39.473570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.058 [2024-07-25 19:17:39.473586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.058 [2024-07-25 19:17:39.491596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.058 [2024-07-25 19:17:39.491634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.058 [2024-07-25 19:17:39.491653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.058 [2024-07-25 19:17:39.509717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.058 [2024-07-25 19:17:39.509748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.058 [2024-07-25 19:17:39.509764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.058 [2024-07-25 19:17:39.527852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.058 [2024-07-25 19:17:39.527885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.058 [2024-07-25 19:17:39.527902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.316 [2024-07-25 19:17:39.546094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.316 [2024-07-25 19:17:39.546149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.317 [2024-07-25 19:17:39.546168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.317 [2024-07-25 19:17:39.564653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.317 [2024-07-25 19:17:39.564691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.317 [2024-07-25 19:17:39.564710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.317 [2024-07-25 19:17:39.584111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.317 [2024-07-25 19:17:39.584160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.317 [2024-07-25 19:17:39.584177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.317 [2024-07-25 19:17:39.603811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.317 [2024-07-25 19:17:39.603848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.317 [2024-07-25 
19:17:39.603867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.317 [2024-07-25 19:17:39.623273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.317 [2024-07-25 19:17:39.623303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.317 [2024-07-25 19:17:39.623319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.317 [2024-07-25 19:17:39.642862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.317 [2024-07-25 19:17:39.642897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.317 [2024-07-25 19:17:39.642916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.317 [2024-07-25 19:17:39.662699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.317 [2024-07-25 19:17:39.662735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.317 [2024-07-25 19:17:39.662754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.317 [2024-07-25 19:17:39.682058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.317 [2024-07-25 19:17:39.682093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9565 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.317 [2024-07-25 19:17:39.682121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.317 [2024-07-25 19:17:39.701917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.317 [2024-07-25 19:17:39.701952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.317 [2024-07-25 19:17:39.701970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.317 [2024-07-25 19:17:39.721183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.317 [2024-07-25 19:17:39.721229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.317 [2024-07-25 19:17:39.721246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.317 [2024-07-25 19:17:39.740897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.317 [2024-07-25 19:17:39.740932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.317 [2024-07-25 19:17:39.740951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.317 [2024-07-25 19:17:39.760340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.317 [2024-07-25 19:17:39.760371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.317 [2024-07-25 19:17:39.760388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.317 [2024-07-25 19:17:39.779993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.317 [2024-07-25 19:17:39.780028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.317 [2024-07-25 19:17:39.780047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.575 [2024-07-25 19:17:39.799761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.575 [2024-07-25 19:17:39.799800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.575 [2024-07-25 19:17:39.799820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.575 [2024-07-25 19:17:39.823125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.575 [2024-07-25 19:17:39.823172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.575 [2024-07-25 19:17:39.823195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.575 [2024-07-25 19:17:39.842919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ec8cb0) 00:26:47.575 [2024-07-25 19:17:39.842955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.575 [2024-07-25 19:17:39.842974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.575 [2024-07-25 19:17:39.862534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.575 [2024-07-25 19:17:39.862569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.575 [2024-07-25 19:17:39.862588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.575 [2024-07-25 19:17:39.882199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.575 [2024-07-25 19:17:39.882245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.575 [2024-07-25 19:17:39.882262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.575 [2024-07-25 19:17:39.901725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.575 [2024-07-25 19:17:39.901772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.575 [2024-07-25 19:17:39.901791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.575 [2024-07-25 19:17:39.921408] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.575 [2024-07-25 19:17:39.921466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.575 [2024-07-25 19:17:39.921484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.575 [2024-07-25 19:17:39.941258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.575 [2024-07-25 19:17:39.941289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.575 [2024-07-25 19:17:39.941321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.575 [2024-07-25 19:17:39.960818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.575 [2024-07-25 19:17:39.960853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.575 [2024-07-25 19:17:39.960873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.576 [2024-07-25 19:17:39.980914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.576 [2024-07-25 19:17:39.980950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-07-25 19:17:39.980969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:47.576 [2024-07-25 19:17:40.000430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.576 [2024-07-25 19:17:40.000466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-07-25 19:17:40.000498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.576 [2024-07-25 19:17:40.020420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.576 [2024-07-25 19:17:40.020511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-07-25 19:17:40.020542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.576 [2024-07-25 19:17:40.039628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.576 [2024-07-25 19:17:40.039675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-07-25 19:17:40.039706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.833 [2024-07-25 19:17:40.060201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.833 [2024-07-25 19:17:40.060236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.833 [2024-07-25 19:17:40.060253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.833 [2024-07-25 19:17:40.079729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec8cb0) 00:26:47.833 [2024-07-25 19:17:40.079766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.834 [2024-07-25 19:17:40.079785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.834 00:26:47.834 Latency(us) 00:26:47.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:47.834 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:47.834 nvme0n1 : 2.00 13409.05 52.38 0.00 0.00 9531.96 6650.69 25437.68 00:26:47.834 =================================================================================================================== 00:26:47.834 Total : 13409.05 52.38 0.00 0.00 9531.96 6650.69 25437.68 00:26:47.834 0 00:26:47.834 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:47.834 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:47.834 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:47.834 | .driver_specific 00:26:47.834 | .nvme_error 00:26:47.834 | .status_code 00:26:47.834 | .command_transient_transport_error' 00:26:47.834 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:48.091 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 105 > 0 )) 00:26:48.091 19:17:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1009508 00:26:48.091 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1009508 ']' 00:26:48.091 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1009508 00:26:48.091 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:48.091 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:48.091 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1009508 00:26:48.091 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:48.091 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:48.091 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1009508' 00:26:48.091 killing process with pid 1009508 00:26:48.091 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1009508 00:26:48.091 Received shutdown signal, test time was about 2.000000 seconds 00:26:48.091 00:26:48.091 Latency(us) 00:26:48.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.091 =================================================================================================================== 00:26:48.091 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:48.091 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1009508 00:26:48.368 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:48.368 19:17:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:48.368 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:48.368 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:48.368 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:48.368 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1010028 00:26:48.368 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:48.368 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1010028 /var/tmp/bperf.sock 00:26:48.368 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1010028 ']' 00:26:48.368 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:48.368 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:48.368 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:48.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:48.368 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:48.368 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:48.368 [2024-07-25 19:17:40.698428] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:26:48.368 [2024-07-25 19:17:40.698534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010028 ] 00:26:48.368 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:48.368 Zero copy mechanism will not be used. 00:26:48.368 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.368 [2024-07-25 19:17:40.765291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.638 [2024-07-25 19:17:40.875581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.638 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:48.638 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:48.638 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:48.638 19:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:48.895 19:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:48.895 19:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.895 19:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:48.895 19:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.895 19:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc 
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.895 19:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:49.152 nvme0n1 00:26:49.152 19:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:49.152 19:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.152 19:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:49.152 19:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.152 19:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:49.152 19:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:49.410 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:49.410 Zero copy mechanism will not be used. 00:26:49.410 Running I/O for 2 seconds... 
00:26:49.410 [2024-07-25 19:17:41.660366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.410 [2024-07-25 19:17:41.660441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.410 [2024-07-25 19:17:41.660463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.410 [2024-07-25 19:17:41.672188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.410 [2024-07-25 19:17:41.672234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.410 [2024-07-25 19:17:41.672253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.410 [2024-07-25 19:17:41.683637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.410 [2024-07-25 19:17:41.683672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.410 [2024-07-25 19:17:41.683692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.410 [2024-07-25 19:17:41.694974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.410 [2024-07-25 19:17:41.695004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.410 [2024-07-25 19:17:41.695029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.410 [2024-07-25 19:17:41.706630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.410 [2024-07-25 19:17:41.706665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.410 [2024-07-25 19:17:41.706684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.410 [2024-07-25 19:17:41.717865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.410 [2024-07-25 19:17:41.717901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.410 [2024-07-25 19:17:41.717920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.410 [2024-07-25 19:17:41.729207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.411 [2024-07-25 19:17:41.729238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.411 [2024-07-25 19:17:41.729255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.411 [2024-07-25 19:17:41.740869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.411 [2024-07-25 19:17:41.740903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.411 [2024-07-25 19:17:41.740921] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.411 [2024-07-25 19:17:41.752481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.411 [2024-07-25 19:17:41.752514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.411 [2024-07-25 19:17:41.752532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.411 [2024-07-25 19:17:41.764009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.411 [2024-07-25 19:17:41.764045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.411 [2024-07-25 19:17:41.764063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.411 [2024-07-25 19:17:41.775477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.411 [2024-07-25 19:17:41.775511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.411 [2024-07-25 19:17:41.775529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.411 [2024-07-25 19:17:41.787781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.411 [2024-07-25 19:17:41.787816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:49.411 [2024-07-25 19:17:41.787835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.411 [2024-07-25 19:17:41.800798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.411 [2024-07-25 19:17:41.800840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.411 [2024-07-25 19:17:41.800861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.411 [2024-07-25 19:17:41.813431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.411 [2024-07-25 19:17:41.813466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.411 [2024-07-25 19:17:41.813485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.411 [2024-07-25 19:17:41.826009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.411 [2024-07-25 19:17:41.826044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.411 [2024-07-25 19:17:41.826064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.411 [2024-07-25 19:17:41.837521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.411 [2024-07-25 19:17:41.837554] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.411 [2024-07-25 19:17:41.837573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.411 [2024-07-25 19:17:41.848897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.411 [2024-07-25 19:17:41.848930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.411 [2024-07-25 19:17:41.848949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.411 [2024-07-25 19:17:41.860257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.411 [2024-07-25 19:17:41.860301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.411 [2024-07-25 19:17:41.860317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.411 [2024-07-25 19:17:41.871816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.411 [2024-07-25 19:17:41.871850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.411 [2024-07-25 19:17:41.871868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.669 [2024-07-25 19:17:41.883267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.669 [2024-07-25 
19:17:41.883309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.669 [2024-07-25 19:17:41.883339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:41.894851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:41.894887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:41.894906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:41.906432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:41.906480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:41.906499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:41.917970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:41.918003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:41.918022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:41.929381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:41.929437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:41.929456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:41.940908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:41.940942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:41.940961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:41.952858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:41.952891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:41.952909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:41.964021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:41.964056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:41.964075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:41.975798] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:41.975842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:41.975861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:41.987880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:41.987916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:41.987935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:41.999291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:41.999341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:41.999360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:42.010793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:42.010827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:42.010846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:42.022551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:42.022583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:42.022602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:42.034289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:42.034332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:42.034355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:42.046098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:42.046151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:42.046167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:42.057726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:42.057760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:42.057779] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:42.069339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:42.069369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:42.069415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:42.080987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:42.081020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:42.081039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:42.092760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:42.092793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:42.092812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:42.104367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:42.104419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 
19:17:42.104438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:42.116038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:42.116076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:42.116095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:42.127731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:42.127765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:42.127783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.670 [2024-07-25 19:17:42.139297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.670 [2024-07-25 19:17:42.139328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.670 [2024-07-25 19:17:42.139347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.150814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.150850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.150870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.162414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.162459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.162476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.173998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.174033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.174053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.185330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.185361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.185394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.196936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.196970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.196998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.208838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.208872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.208891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.220416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.220460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.220476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.232343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.232387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.232407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.243999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.244033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.244052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.255579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.255612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.255632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.266941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.266975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.266994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.278692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.278727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.278746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.290403] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.290452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.290471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.301935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.301974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.301994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.313439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.313473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.313492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.324985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.325019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.325038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.336765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.336798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.336817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.348241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.348269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.348286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.359875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.359909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.359927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.371429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.371462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.371481] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.382984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.383016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.383035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.929 [2024-07-25 19:17:42.394574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:49.929 [2024-07-25 19:17:42.394607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.929 [2024-07-25 19:17:42.394626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.406301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.406349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.406367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.418237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.418268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 
19:17:42.418284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.429925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.429960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.429979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.441525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.441558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.441577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.452912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.452946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.452965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.464424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.464458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.464477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.475943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.475976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.475993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.487846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.487879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.487898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.499478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.499516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.499536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.511128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.511174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.511191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.522623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.522655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.522674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.534041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.534074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.534092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.545633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.545661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.545677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.557077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.557118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.557138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.568724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.568756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.568774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.580332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.580360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.580376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.591732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.591764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.591782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.603007] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.603040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.603058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.614644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.614677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.614696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.626261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.626290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.626306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.637978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.638011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.638029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:26:50.188 [2024-07-25 19:17:42.649741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.188 [2024-07-25 19:17:42.649773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.188 [2024-07-25 19:17:42.649792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.661175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.661208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.661225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.672934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.672970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.672990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.684376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.684423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.684443] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.696253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.696282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.696305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.707859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.707892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.707910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.719848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.719883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.719903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.731736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.731770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 
19:17:42.731789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.743369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.743414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.743434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.755078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.755118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.755139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.766550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.766583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.766602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.778215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.778245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.778262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.790287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.790335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.790352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.801863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.801903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.801923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.813384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.813432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.813451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.824982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.825012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.825045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.836459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.836493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.836512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.848013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.447 [2024-07-25 19:17:42.848045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.447 [2024-07-25 19:17:42.848061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.447 [2024-07-25 19:17:42.859421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.448 [2024-07-25 19:17:42.859465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.448 [2024-07-25 19:17:42.859485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.448 [2024-07-25 19:17:42.871055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18bb290) 00:26:50.448 [2024-07-25 19:17:42.871092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.448 [2024-07-25 19:17:42.871122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.448 [2024-07-25 19:17:42.882644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.448 [2024-07-25 19:17:42.882676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.448 [2024-07-25 19:17:42.882707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.448 [2024-07-25 19:17:42.893923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.448 [2024-07-25 19:17:42.893971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.448 [2024-07-25 19:17:42.893995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.448 [2024-07-25 19:17:42.905853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.448 [2024-07-25 19:17:42.905888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.448 [2024-07-25 19:17:42.905906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.448 [2024-07-25 19:17:42.917400] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.448 [2024-07-25 19:17:42.917437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.448 [2024-07-25 19:17:42.917458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:42.928912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:42.928948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:42.928968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:42.940809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:42.940846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:42.940865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:42.952464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:42.952499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:42.952517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:42.963821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:42.963855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:42.963873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:42.975645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:42.975680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:42.975708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:42.986197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:42.986228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:42.986245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:42.997934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:42.997973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:42.997993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:43.009685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:43.009719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:43.009738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:43.021159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:43.021204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:43.021223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:43.032953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:43.032985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:43.033004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:43.044704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:43.044737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 
19:17:43.044756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:43.056117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:43.056165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:43.056181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:43.067629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:43.067662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:43.067681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:43.079088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:43.079128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:43.079163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:43.090852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:43.090885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:43.090904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:43.102380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:43.102428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:43.102447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:43.113777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:43.113810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:43.113834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:43.125204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:43.125233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:43.125250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:43.136845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:43.136879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:43.136901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:43.148184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:43.148227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:43.148244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:43.159775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:43.159809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:43.159828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.707 [2024-07-25 19:17:43.171309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.707 [2024-07-25 19:17:43.171339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.707 [2024-07-25 19:17:43.171356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.182944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.182981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.183002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.194374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.194420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.194446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.205897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.205931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.205949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.217676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.217710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.217728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.228962] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.228995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.229015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.240473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.240506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.240536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.251994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.252027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.252046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.263496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.263529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.263547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.275083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.275130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.275167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.286716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.286749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.286768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.298297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.298331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.298356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.309835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.309867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.309886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.321207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.321250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.321267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.332763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.332796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.332814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.344287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.344316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.344331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.355724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.355757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 
19:17:43.355776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.367335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.367364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.367381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.378914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.378948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.378967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.390606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.390639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.390663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.402072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.402113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.402148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.413651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.966 [2024-07-25 19:17:43.413696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.966 [2024-07-25 19:17:43.413713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.966 [2024-07-25 19:17:43.425077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:50.967 [2024-07-25 19:17:43.425118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.967 [2024-07-25 19:17:43.425139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.436679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.436717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.436737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.448366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.448415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.448435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.460002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.460035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.460054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.471745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.471779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.471797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.483587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.483621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.483639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.495057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.495096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.495126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.506696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.506730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.506749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.518176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.518206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.518222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.529804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.529839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.529857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.541349] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.541393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.541410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.553304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.553347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.553363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.564988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.565021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.565040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.576613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.576647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.576666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.588414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.588460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.588479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.600051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.600084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.600109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.611948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.611981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.611999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.623422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.623451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.623482] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.635173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.635202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.635218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.225 [2024-07-25 19:17:43.646820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18bb290) 00:26:51.225 [2024-07-25 19:17:43.646853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.225 [2024-07-25 19:17:43.646871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.225 00:26:51.225 Latency(us) 00:26:51.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.225 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:51.225 nvme0n1 : 2.00 2672.84 334.11 0.00 0.00 5981.24 4854.52 13107.20 00:26:51.225 =================================================================================================================== 00:26:51.225 Total : 2672.84 334.11 0.00 0.00 5981.24 4854.52 13107.20 00:26:51.225 0 00:26:51.225 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:51.225 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:51.225 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:51.225 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:51.225 | .driver_specific 00:26:51.225 | .nvme_error 00:26:51.225 | .status_code 00:26:51.225 | .command_transient_transport_error' 00:26:51.482 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 172 > 0 )) 00:26:51.482 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1010028 00:26:51.482 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1010028 ']' 00:26:51.482 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1010028 00:26:51.482 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:51.482 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:51.482 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1010028 00:26:51.482 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:51.482 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:51.482 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1010028' 00:26:51.482 killing process with pid 1010028 00:26:51.482 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1010028 00:26:51.482 Received shutdown signal, test time was about 2.000000 seconds 00:26:51.482 00:26:51.483 Latency(us) 00:26:51.483 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:26:51.483 =================================================================================================================== 00:26:51.483 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:51.483 19:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1010028 00:26:52.048 19:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:52.048 19:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:52.048 19:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:52.048 19:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:52.048 19:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:52.048 19:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1010436 00:26:52.048 19:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1010436 /var/tmp/bperf.sock 00:26:52.048 19:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:52.048 19:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1010436 ']' 00:26:52.048 19:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:52.048 19:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:52.048 19:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:52.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:52.048 19:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:52.048 19:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:52.048 [2024-07-25 19:17:44.269857] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:52.048 [2024-07-25 19:17:44.269955] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010436 ] 00:26:52.048 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.048 [2024-07-25 19:17:44.340117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.048 [2024-07-25 19:17:44.458669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.982 19:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:52.982 19:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:52.982 19:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:52.982 19:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:53.239 19:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:53.239 19:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.239 19:17:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.239 19:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.239 19:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.239 19:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.497 nvme0n1 00:26:53.497 19:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:53.497 19:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.497 19:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.755 19:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.755 19:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:53.755 19:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:53.755 Running I/O for 2 seconds... 
00:26:53.755 [2024-07-25 19:17:46.088662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f9f68 00:26:53.755 [2024-07-25 19:17:46.089569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.755 [2024-07-25 19:17:46.089614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.755 [2024-07-25 19:17:46.101799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f4b08 00:26:53.755 [2024-07-25 19:17:46.102687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.755 [2024-07-25 19:17:46.102723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.755 [2024-07-25 19:17:46.114635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f8e88 00:26:53.755 [2024-07-25 19:17:46.115530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.755 [2024-07-25 19:17:46.115563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.755 [2024-07-25 19:17:46.127321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190ed4e8 00:26:53.755 [2024-07-25 19:17:46.128230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.755 [2024-07-25 19:17:46.128260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.755 [2024-07-25 19:17:46.140506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190eaef0 00:26:53.755 [2024-07-25 19:17:46.141555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.755 [2024-07-25 19:17:46.141588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:53.755 [2024-07-25 19:17:46.153296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e9e10 00:26:53.755 [2024-07-25 19:17:46.154431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.755 [2024-07-25 19:17:46.154460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.755 [2024-07-25 19:17:46.165970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e8d30 00:26:53.755 [2024-07-25 19:17:46.167052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.755 [2024-07-25 19:17:46.167080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.755 [2024-07-25 19:17:46.178185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e0ea0 00:26:53.755 [2024-07-25 19:17:46.179322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.755 [2024-07-25 19:17:46.179350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.755 [2024-07-25 19:17:46.190903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e1f80 00:26:53.755 [2024-07-25 19:17:46.191995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.755 [2024-07-25 19:17:46.192026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.755 [2024-07-25 19:17:46.203604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e3060 00:26:53.755 [2024-07-25 19:17:46.204657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.755 [2024-07-25 19:17:46.204688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.755 [2024-07-25 19:17:46.216571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e6b70 00:26:53.755 [2024-07-25 19:17:46.217634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.755 [2024-07-25 19:17:46.217666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.014 [2024-07-25 19:17:46.229165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e7c50 00:26:54.014 [2024-07-25 19:17:46.230191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:54.014 [2024-07-25 19:17:46.230222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.014 [2024-07-25 19:17:46.240816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fbcf0 00:26:54.014 [2024-07-25 19:17:46.241774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.014 [2024-07-25 19:17:46.241811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.014 [2024-07-25 19:17:46.252421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190df550 00:26:54.014 [2024-07-25 19:17:46.253385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.014 [2024-07-25 19:17:46.253414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.014 [2024-07-25 19:17:46.264098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e0630 00:26:54.014 [2024-07-25 19:17:46.265131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.014 [2024-07-25 19:17:46.265160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.014 [2024-07-25 19:17:46.275949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190ff3c8 00:26:54.014 [2024-07-25 19:17:46.276925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:9163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.014 [2024-07-25 19:17:46.276954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.014 [2024-07-25 19:17:46.287647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fd640 00:26:54.014 [2024-07-25 19:17:46.288691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.014 [2024-07-25 19:17:46.288719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.014 [2024-07-25 19:17:46.299359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fc560 00:26:54.014 [2024-07-25 19:17:46.300378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.014 [2024-07-25 19:17:46.300407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.014 [2024-07-25 19:17:46.311063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f20d8 00:26:54.014 [2024-07-25 19:17:46.312119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.014 [2024-07-25 19:17:46.312148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.014 [2024-07-25 19:17:46.323001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f31b8 00:26:54.014 [2024-07-25 19:17:46.323978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.014 [2024-07-25 19:17:46.324007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.014 [2024-07-25 19:17:46.335000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190ebb98 00:26:54.014 [2024-07-25 19:17:46.335973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.014 [2024-07-25 19:17:46.336001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.014 [2024-07-25 19:17:46.346698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190eaab8 00:26:54.014 [2024-07-25 19:17:46.347697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.014 [2024-07-25 19:17:46.347726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.014 [2024-07-25 19:17:46.358402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e99d8 00:26:54.014 [2024-07-25 19:17:46.359370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.014 [2024-07-25 19:17:46.359399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.014 [2024-07-25 19:17:46.370210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e88f8 
00:26:54.014 [2024-07-25 19:17:46.371197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.014 [2024-07-25 19:17:46.371225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.014 [2024-07-25 19:17:46.381878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e12d8 00:26:54.014 [2024-07-25 19:17:46.382850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.015 [2024-07-25 19:17:46.382878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.015 [2024-07-25 19:17:46.393540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e23b8 00:26:54.015 [2024-07-25 19:17:46.394545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.015 [2024-07-25 19:17:46.394573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.015 [2024-07-25 19:17:46.405289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e6738 00:26:54.015 [2024-07-25 19:17:46.406293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.015 [2024-07-25 19:17:46.406321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.015 [2024-07-25 19:17:46.416949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1049f30) with pdu=0x2000190e7818 00:26:54.015 [2024-07-25 19:17:46.417976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.015 [2024-07-25 19:17:46.418004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.015 [2024-07-25 19:17:46.428590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fb8b8 00:26:54.015 [2024-07-25 19:17:46.429596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.015 [2024-07-25 19:17:46.429624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.015 [2024-07-25 19:17:46.440223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190df118 00:26:54.015 [2024-07-25 19:17:46.441228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.015 [2024-07-25 19:17:46.441257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.015 [2024-07-25 19:17:46.451908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e01f8 00:26:54.015 [2024-07-25 19:17:46.452875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.015 [2024-07-25 19:17:46.452905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.015 [2024-07-25 19:17:46.463614] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fef90 00:26:54.015 [2024-07-25 19:17:46.464605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.015 [2024-07-25 19:17:46.464634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.015 [2024-07-25 19:17:46.475284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fda78 00:26:54.015 [2024-07-25 19:17:46.476240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.015 [2024-07-25 19:17:46.476269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.274 [2024-07-25 19:17:46.487055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fc998 00:26:54.274 [2024-07-25 19:17:46.488060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.274 [2024-07-25 19:17:46.488092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.274 [2024-07-25 19:17:46.498998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f1ca0 00:26:54.274 [2024-07-25 19:17:46.500034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.274 [2024-07-25 19:17:46.500066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:26:54.274 [2024-07-25 19:17:46.510731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f2d80 00:26:54.274 [2024-07-25 19:17:46.511799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.274 [2024-07-25 19:17:46.511829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.274 [2024-07-25 19:17:46.522456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190ebfd0 00:26:54.274 [2024-07-25 19:17:46.523427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.274 [2024-07-25 19:17:46.523455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.274 [2024-07-25 19:17:46.534113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190eaef0 00:26:54.274 [2024-07-25 19:17:46.535052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.274 [2024-07-25 19:17:46.535080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.274 [2024-07-25 19:17:46.545727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e9e10 00:26:54.274 [2024-07-25 19:17:46.546704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.274 [2024-07-25 19:17:46.546738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.274 [2024-07-25 19:17:46.557421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e8d30 00:26:54.274 [2024-07-25 19:17:46.558395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.275 [2024-07-25 19:17:46.558424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.275 [2024-07-25 19:17:46.569126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e0ea0 00:26:54.275 [2024-07-25 19:17:46.570068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.275 [2024-07-25 19:17:46.570117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.275 [2024-07-25 19:17:46.580857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e1f80 00:26:54.275 [2024-07-25 19:17:46.581803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.275 [2024-07-25 19:17:46.581830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.275 [2024-07-25 19:17:46.592490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e3060 00:26:54.275 [2024-07-25 19:17:46.593518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.275 [2024-07-25 19:17:46.593547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.275 [2024-07-25 19:17:46.604245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e6b70 00:26:54.275 [2024-07-25 19:17:46.605248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.275 [2024-07-25 19:17:46.605274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.275 [2024-07-25 19:17:46.616038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e7c50 00:26:54.275 [2024-07-25 19:17:46.616998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.275 [2024-07-25 19:17:46.617026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.275 [2024-07-25 19:17:46.627892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fbcf0 00:26:54.275 [2024-07-25 19:17:46.628953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.275 [2024-07-25 19:17:46.628981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.275 [2024-07-25 19:17:46.639610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190df550 00:26:54.275 [2024-07-25 19:17:46.640657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.275 [2024-07-25 19:17:46.640685] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.275 [2024-07-25 19:17:46.651364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e0630 00:26:54.275 [2024-07-25 19:17:46.652396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.275 [2024-07-25 19:17:46.652424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.275 [2024-07-25 19:17:46.663155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190ff3c8 00:26:54.275 [2024-07-25 19:17:46.664132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.275 [2024-07-25 19:17:46.664160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.275 [2024-07-25 19:17:46.674901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fd640 00:26:54.275 [2024-07-25 19:17:46.675948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.275 [2024-07-25 19:17:46.675976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.275 [2024-07-25 19:17:46.686582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fc560 00:26:54.275 [2024-07-25 19:17:46.687564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24596 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:54.275 [2024-07-25 19:17:46.687592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.275 [2024-07-25 19:17:46.698278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f20d8 00:26:54.275 [2024-07-25 19:17:46.699295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.275 [2024-07-25 19:17:46.699324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.275 [2024-07-25 19:17:46.710027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f31b8 00:26:54.275 [2024-07-25 19:17:46.711002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.275 [2024-07-25 19:17:46.711031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.275 [2024-07-25 19:17:46.721762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190ebb98 00:26:54.275 [2024-07-25 19:17:46.722796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.275 [2024-07-25 19:17:46.722846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.275 [2024-07-25 19:17:46.733580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190eaab8 00:26:54.275 [2024-07-25 19:17:46.734574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 
nsid:1 lba:5118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.275 [2024-07-25 19:17:46.734601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.534 [2024-07-25 19:17:46.745418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e99d8 00:26:54.534 [2024-07-25 19:17:46.746394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.534 [2024-07-25 19:17:46.746425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.534 [2024-07-25 19:17:46.757225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e88f8 00:26:54.534 [2024-07-25 19:17:46.758233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.534 [2024-07-25 19:17:46.758264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.534 [2024-07-25 19:17:46.769038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e12d8 00:26:54.534 [2024-07-25 19:17:46.770013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.534 [2024-07-25 19:17:46.770043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.534 [2024-07-25 19:17:46.780787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e23b8 00:26:54.534 [2024-07-25 19:17:46.781842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.534 [2024-07-25 19:17:46.781870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.534 [2024-07-25 19:17:46.792577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e6738 00:26:54.534 [2024-07-25 19:17:46.793607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.534 [2024-07-25 19:17:46.793637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.534 [2024-07-25 19:17:46.804271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e7818 00:26:54.534 [2024-07-25 19:17:46.805238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.534 [2024-07-25 19:17:46.805267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.534 [2024-07-25 19:17:46.815997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fb8b8 00:26:54.534 [2024-07-25 19:17:46.816973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.534 [2024-07-25 19:17:46.817002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.534 [2024-07-25 19:17:46.827530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190df118 
00:26:54.534 [2024-07-25 19:17:46.828505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.534 [2024-07-25 19:17:46.828549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.534 [2024-07-25 19:17:46.839205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e01f8 00:26:54.534 [2024-07-25 19:17:46.840214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.534 [2024-07-25 19:17:46.840242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.534 [2024-07-25 19:17:46.850928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fef90 00:26:54.534 [2024-07-25 19:17:46.851981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.534 [2024-07-25 19:17:46.852016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.534 [2024-07-25 19:17:46.862651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fda78 00:26:54.535 [2024-07-25 19:17:46.863656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.535 [2024-07-25 19:17:46.863686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.535 [2024-07-25 19:17:46.874433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1049f30) with pdu=0x2000190fc998 00:26:54.535 [2024-07-25 19:17:46.875387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.535 [2024-07-25 19:17:46.875415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.535 [2024-07-25 19:17:46.886188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f1ca0 00:26:54.535 [2024-07-25 19:17:46.887223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.535 [2024-07-25 19:17:46.887251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.535 [2024-07-25 19:17:46.897911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f2d80 00:26:54.535 [2024-07-25 19:17:46.898950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.535 [2024-07-25 19:17:46.898978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.535 [2024-07-25 19:17:46.909613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190ebfd0 00:26:54.535 [2024-07-25 19:17:46.910620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.535 [2024-07-25 19:17:46.910649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.535 [2024-07-25 19:17:46.921283] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190eaef0 00:26:54.535 [2024-07-25 19:17:46.922269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.535 [2024-07-25 19:17:46.922298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.535 [2024-07-25 19:17:46.932819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e9e10 00:26:54.535 [2024-07-25 19:17:46.933803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.535 [2024-07-25 19:17:46.933830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.535 [2024-07-25 19:17:46.944606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e8d30 00:26:54.535 [2024-07-25 19:17:46.945627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.535 [2024-07-25 19:17:46.945656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.535 [2024-07-25 19:17:46.956319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e0ea0 00:26:54.535 [2024-07-25 19:17:46.957280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.535 [2024-07-25 19:17:46.957309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 
m:0 dnr:0 00:26:54.535 [2024-07-25 19:17:46.968089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e1f80 00:26:54.535 [2024-07-25 19:17:46.969123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.535 [2024-07-25 19:17:46.969154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.535 [2024-07-25 19:17:46.979795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e3060 00:26:54.535 [2024-07-25 19:17:46.980869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.535 [2024-07-25 19:17:46.980898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.535 [2024-07-25 19:17:46.991405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e6b70 00:26:54.535 [2024-07-25 19:17:46.992388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.535 [2024-07-25 19:17:46.992417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.535 [2024-07-25 19:17:47.003229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e7c50 00:26:54.535 [2024-07-25 19:17:47.004215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.535 [2024-07-25 19:17:47.004247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.794 [2024-07-25 19:17:47.015017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fbcf0 00:26:54.794 [2024-07-25 19:17:47.016056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.794 [2024-07-25 19:17:47.016087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.794 [2024-07-25 19:17:47.026715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190df550 00:26:54.794 [2024-07-25 19:17:47.027707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.794 [2024-07-25 19:17:47.027737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.794 [2024-07-25 19:17:47.038353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e0630 00:26:54.794 [2024-07-25 19:17:47.039321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.794 [2024-07-25 19:17:47.039350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.794 [2024-07-25 19:17:47.050120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190ff3c8 00:26:54.794 [2024-07-25 19:17:47.051163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.794 [2024-07-25 19:17:47.051192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.794 [2024-07-25 19:17:47.061767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fd640 00:26:54.794 [2024-07-25 19:17:47.062800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.794 [2024-07-25 19:17:47.062844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.794 [2024-07-25 19:17:47.073330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fc560 00:26:54.794 [2024-07-25 19:17:47.074259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.794 [2024-07-25 19:17:47.074287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.794 [2024-07-25 19:17:47.085030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f20d8 00:26:54.794 [2024-07-25 19:17:47.086060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.794 [2024-07-25 19:17:47.086089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.794 [2024-07-25 19:17:47.096843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f31b8 00:26:54.794 [2024-07-25 19:17:47.097911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.794 
[2024-07-25 19:17:47.097940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.794 [2024-07-25 19:17:47.108530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190ebb98 00:26:54.795 [2024-07-25 19:17:47.109577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.795 [2024-07-25 19:17:47.109606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.795 [2024-07-25 19:17:47.120209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190eaab8 00:26:54.795 [2024-07-25 19:17:47.121151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.795 [2024-07-25 19:17:47.121180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.795 [2024-07-25 19:17:47.131969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e99d8 00:26:54.795 [2024-07-25 19:17:47.133027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.795 [2024-07-25 19:17:47.133054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.795 [2024-07-25 19:17:47.143803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e88f8 00:26:54.795 [2024-07-25 19:17:47.144844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18786 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.795 [2024-07-25 19:17:47.144871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.795 [2024-07-25 19:17:47.155470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e12d8 00:26:54.795 [2024-07-25 19:17:47.156420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.795 [2024-07-25 19:17:47.156455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.795 [2024-07-25 19:17:47.167180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e23b8 00:26:54.795 [2024-07-25 19:17:47.168152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.795 [2024-07-25 19:17:47.168181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.795 [2024-07-25 19:17:47.178894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e6738 00:26:54.795 [2024-07-25 19:17:47.179878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.795 [2024-07-25 19:17:47.179907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.795 [2024-07-25 19:17:47.190968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e7818 00:26:54.795 [2024-07-25 19:17:47.192006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:28 nsid:1 lba:19017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.795 [2024-07-25 19:17:47.192039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.795 [2024-07-25 19:17:47.203550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fb8b8 00:26:54.795 [2024-07-25 19:17:47.204614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.795 [2024-07-25 19:17:47.204646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.795 [2024-07-25 19:17:47.216295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190df118 00:26:54.795 [2024-07-25 19:17:47.217428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.795 [2024-07-25 19:17:47.217459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.795 [2024-07-25 19:17:47.228969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e01f8 00:26:54.795 [2024-07-25 19:17:47.230027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.795 [2024-07-25 19:17:47.230059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.795 [2024-07-25 19:17:47.241695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fef90 00:26:54.795 [2024-07-25 19:17:47.242765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.795 [2024-07-25 19:17:47.242796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.795 [2024-07-25 19:17:47.254178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fda78 00:26:54.795 [2024-07-25 19:17:47.255245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.795 [2024-07-25 19:17:47.255277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.053 [2024-07-25 19:17:47.266830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fc998 00:26:55.053 [2024-07-25 19:17:47.267936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.053 [2024-07-25 19:17:47.267970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.053 [2024-07-25 19:17:47.279488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f1ca0 00:26:55.053 [2024-07-25 19:17:47.280537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.053 [2024-07-25 19:17:47.280571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.053 [2024-07-25 19:17:47.292072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f2d80 00:26:55.053 
[2024-07-25 19:17:47.293173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.053 [2024-07-25 19:17:47.293216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.053 [2024-07-25 19:17:47.304636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190ebfd0 00:26:55.053 [2024-07-25 19:17:47.305668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.053 [2024-07-25 19:17:47.305699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.053 [2024-07-25 19:17:47.317091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190eaef0 00:26:55.054 [2024-07-25 19:17:47.318214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.318243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.329867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e9e10 00:26:55.054 [2024-07-25 19:17:47.330916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.330948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.342391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1049f30) with pdu=0x2000190e8d30 00:26:55.054 [2024-07-25 19:17:47.343474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.343505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.355125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e0ea0 00:26:55.054 [2024-07-25 19:17:47.356191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.356235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.367708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e1f80 00:26:55.054 [2024-07-25 19:17:47.368780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.368811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.380251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e3060 00:26:55.054 [2024-07-25 19:17:47.381336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.381365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.392852] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e6b70 00:26:55.054 [2024-07-25 19:17:47.393920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.393952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.405505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e7c50 00:26:55.054 [2024-07-25 19:17:47.406576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.406605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.418062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fbcf0 00:26:55.054 [2024-07-25 19:17:47.419125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.419172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.430764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190df550 00:26:55.054 [2024-07-25 19:17:47.431817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.431848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 
m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.443231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e0630 00:26:55.054 [2024-07-25 19:17:47.444331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.444359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.455909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190ff3c8 00:26:55.054 [2024-07-25 19:17:47.456961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.456993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.468499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fd640 00:26:55.054 [2024-07-25 19:17:47.469597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.469628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.481040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fc560 00:26:55.054 [2024-07-25 19:17:47.482096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.482139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.493742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f20d8 00:26:55.054 [2024-07-25 19:17:47.494792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.494823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.506218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f31b8 00:26:55.054 [2024-07-25 19:17:47.507294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.507325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.054 [2024-07-25 19:17:47.518834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190ebb98 00:26:55.054 [2024-07-25 19:17:47.519914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.054 [2024-07-25 19:17:47.519960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.312 [2024-07-25 19:17:47.531515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190eaab8 00:26:55.312 [2024-07-25 19:17:47.532585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.312 [2024-07-25 19:17:47.532620] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.312 [2024-07-25 19:17:47.544021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e99d8 00:26:55.312 [2024-07-25 19:17:47.545090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.312 [2024-07-25 19:17:47.545128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.312 [2024-07-25 19:17:47.556624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e88f8 00:26:55.312 [2024-07-25 19:17:47.557657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.312 [2024-07-25 19:17:47.557689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.312 [2024-07-25 19:17:47.569056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e12d8 00:26:55.312 [2024-07-25 19:17:47.570136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.312 [2024-07-25 19:17:47.570183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.312 [2024-07-25 19:17:47.581735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e23b8 00:26:55.312 [2024-07-25 19:17:47.582811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:55.313 [2024-07-25 19:17:47.582843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.594204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e6738 00:26:55.313 [2024-07-25 19:17:47.595311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.313 [2024-07-25 19:17:47.595343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.606845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e7818 00:26:55.313 [2024-07-25 19:17:47.607924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.313 [2024-07-25 19:17:47.607955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.619377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fb8b8 00:26:55.313 [2024-07-25 19:17:47.620586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.313 [2024-07-25 19:17:47.620617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.631948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190df118 00:26:55.313 [2024-07-25 19:17:47.633026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:20302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.313 [2024-07-25 19:17:47.633058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.644561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e01f8 00:26:55.313 [2024-07-25 19:17:47.645619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.313 [2024-07-25 19:17:47.645651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.657127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fef90 00:26:55.313 [2024-07-25 19:17:47.658223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.313 [2024-07-25 19:17:47.658252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.669790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fda78 00:26:55.313 [2024-07-25 19:17:47.670841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.313 [2024-07-25 19:17:47.670873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.682392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fc998 00:26:55.313 [2024-07-25 19:17:47.683471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.313 [2024-07-25 19:17:47.683503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.694942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f1ca0 00:26:55.313 [2024-07-25 19:17:47.695992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.313 [2024-07-25 19:17:47.696024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.707543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f2d80 00:26:55.313 [2024-07-25 19:17:47.708584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.313 [2024-07-25 19:17:47.708614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.720040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190ebfd0 00:26:55.313 [2024-07-25 19:17:47.721115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.313 [2024-07-25 19:17:47.721146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.732820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190eaef0 00:26:55.313 
[2024-07-25 19:17:47.733879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.313 [2024-07-25 19:17:47.733910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.745395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e9e10 00:26:55.313 [2024-07-25 19:17:47.746515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.313 [2024-07-25 19:17:47.746545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.757938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e8d30 00:26:55.313 [2024-07-25 19:17:47.759006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.313 [2024-07-25 19:17:47.759038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.770533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e0ea0 00:26:55.313 [2024-07-25 19:17:47.771603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.313 [2024-07-25 19:17:47.771634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.313 [2024-07-25 19:17:47.783171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1049f30) with pdu=0x2000190e1f80 00:26:55.571 [2024-07-25 19:17:47.784230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.571 [2024-07-25 19:17:47.784261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.571 [2024-07-25 19:17:47.795825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e3060 00:26:55.571 [2024-07-25 19:17:47.796854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.571 [2024-07-25 19:17:47.796889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.571 [2024-07-25 19:17:47.808373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e6b70 00:26:55.571 [2024-07-25 19:17:47.809455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.571 [2024-07-25 19:17:47.809494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.571 [2024-07-25 19:17:47.820901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e7c50 00:26:55.571 [2024-07-25 19:17:47.821947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.571 [2024-07-25 19:17:47.821978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:47.833502] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fbcf0 00:26:55.572 [2024-07-25 19:17:47.834571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:47.834603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:47.845954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190df550 00:26:55.572 [2024-07-25 19:17:47.847019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:47.847050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:47.858515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e0630 00:26:55.572 [2024-07-25 19:17:47.859582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:47.859614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:47.871071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190ff3c8 00:26:55.572 [2024-07-25 19:17:47.872139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:47.872170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:26:55.572 [2024-07-25 19:17:47.883739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fd640 00:26:55.572 [2024-07-25 19:17:47.884764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:47.884795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:47.896201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fc560 00:26:55.572 [2024-07-25 19:17:47.897352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:47.897381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:47.908859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f20d8 00:26:55.572 [2024-07-25 19:17:47.909941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:47.909973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:47.921567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190f31b8 00:26:55.572 [2024-07-25 19:17:47.922622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:47.922653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:47.934110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190ebb98 00:26:55.572 [2024-07-25 19:17:47.935188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:47.935216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:47.946646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190eaab8 00:26:55.572 [2024-07-25 19:17:47.947718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:47.947748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:47.959193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e99d8 00:26:55.572 [2024-07-25 19:17:47.960330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:47.960359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:47.971774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e88f8 00:26:55.572 [2024-07-25 19:17:47.972818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:47.972849] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:47.984224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e12d8 00:26:55.572 [2024-07-25 19:17:47.985358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:47.985387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:47.996891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e23b8 00:26:55.572 [2024-07-25 19:17:47.997936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:47.997967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:48.009424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e6738 00:26:55.572 [2024-07-25 19:17:48.010498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:48.010530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:48.021916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e7818 00:26:55.572 [2024-07-25 19:17:48.022974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:55.572 [2024-07-25 19:17:48.023006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.572 [2024-07-25 19:17:48.034557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fb8b8 00:26:55.572 [2024-07-25 19:17:48.035623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.572 [2024-07-25 19:17:48.035654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.829 [2024-07-25 19:17:48.047211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190df118 00:26:55.829 [2024-07-25 19:17:48.048316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.829 [2024-07-25 19:17:48.048347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.829 [2024-07-25 19:17:48.059813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190e01f8 00:26:55.829 [2024-07-25 19:17:48.060848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.829 [2024-07-25 19:17:48.060881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.829 [2024-07-25 19:17:48.072235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fef90 00:26:55.829 [2024-07-25 19:17:48.073314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:6986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.829 [2024-07-25 19:17:48.073346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.829 [2024-07-25 19:17:48.084747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1049f30) with pdu=0x2000190fda78 00:26:55.829 [2024-07-25 19:17:48.085815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.829 [2024-07-25 19:17:48.085847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.829 00:26:55.829 Latency(us) 00:26:55.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.829 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:55.829 nvme0n1 : 2.01 20978.04 81.95 0.00 0.00 6091.22 2973.39 12621.75 00:26:55.829 =================================================================================================================== 00:26:55.829 Total : 20978.04 81.95 0.00 0.00 6091.22 2973.39 12621.75 00:26:55.829 0 00:26:55.829 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:55.829 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:55.829 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:55.829 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:55.829 | .driver_specific 00:26:55.829 | .nvme_error 00:26:55.829 | .status_code 00:26:55.829 | .command_transient_transport_error' 00:26:56.087 
19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 165 > 0 )) 00:26:56.087 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1010436 00:26:56.087 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1010436 ']' 00:26:56.087 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1010436 00:26:56.087 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:56.088 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:56.088 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1010436 00:26:56.088 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:56.088 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:56.088 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1010436' 00:26:56.088 killing process with pid 1010436 00:26:56.088 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1010436 00:26:56.088 Received shutdown signal, test time was about 2.000000 seconds 00:26:56.088 00:26:56.088 Latency(us) 00:26:56.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.088 =================================================================================================================== 00:26:56.088 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:56.088 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1010436 00:26:56.346 19:17:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:56.346 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:56.346 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:56.346 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:56.346 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:56.346 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1010974 00:26:56.346 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1010974 /var/tmp/bperf.sock 00:26:56.346 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:56.346 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1010974 ']' 00:26:56.346 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:56.346 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:56.346 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:56.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:56.346 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:56.346 19:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:56.346 [2024-07-25 19:17:48.682769] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:56.346 [2024-07-25 19:17:48.682849] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010974 ] 00:26:56.346 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:56.346 Zero copy mechanism will not be used. 00:26:56.346 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.346 [2024-07-25 19:17:48.758396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.604 [2024-07-25 19:17:48.874925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.169 19:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:57.169 19:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:57.169 19:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:57.169 19:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:57.427 19:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:57.427 19:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.427 
19:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:57.427 19:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.427 19:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.427 19:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.992 nvme0n1 00:26:57.992 19:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:57.992 19:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.992 19:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:57.992 19:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.992 19:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:57.992 19:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:58.251 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:58.251 Zero copy mechanism will not be used. 00:26:58.251 Running I/O for 2 seconds... 
00:26:58.251 [2024-07-25 19:17:50.502020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.251 [2024-07-25 19:17:50.502468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.251 [2024-07-25 19:17:50.502507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.251 [2024-07-25 19:17:50.517963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.251 [2024-07-25 19:17:50.518371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.251 [2024-07-25 19:17:50.518416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.251 [2024-07-25 19:17:50.533387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.251 [2024-07-25 19:17:50.533819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.251 [2024-07-25 19:17:50.533862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.251 [2024-07-25 19:17:50.548349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.251 [2024-07-25 19:17:50.548525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.251 [2024-07-25 19:17:50.548554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.251 [2024-07-25 19:17:50.563709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.251 [2024-07-25 19:17:50.564132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.251 [2024-07-25 19:17:50.564178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.251 [2024-07-25 19:17:50.578545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.251 [2024-07-25 19:17:50.578947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.251 [2024-07-25 19:17:50.578979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.251 [2024-07-25 19:17:50.593524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.251 [2024-07-25 19:17:50.593904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.251 [2024-07-25 19:17:50.593953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.251 [2024-07-25 19:17:50.608141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.251 [2024-07-25 19:17:50.608564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.251 [2024-07-25 19:17:50.608620] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.251 [2024-07-25 19:17:50.622743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.251 [2024-07-25 19:17:50.623152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.251 [2024-07-25 19:17:50.623202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.251 [2024-07-25 19:17:50.638844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.251 [2024-07-25 19:17:50.639291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.251 [2024-07-25 19:17:50.639320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.251 [2024-07-25 19:17:50.654077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.251 [2024-07-25 19:17:50.654251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.251 [2024-07-25 19:17:50.654280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.251 [2024-07-25 19:17:50.668906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.251 [2024-07-25 19:17:50.669300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:58.251 [2024-07-25 19:17:50.669328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.251 [2024-07-25 19:17:50.683279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.251 [2024-07-25 19:17:50.683665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.251 [2024-07-25 19:17:50.683706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.251 [2024-07-25 19:17:50.698205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.251 [2024-07-25 19:17:50.698594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.251 [2024-07-25 19:17:50.698651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.251 [2024-07-25 19:17:50.713083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.251 [2024-07-25 19:17:50.713473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.251 [2024-07-25 19:17:50.713500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.728411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.728783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.728834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.743789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.743986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.744014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.758976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.759379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.759408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.774341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.774718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.774761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.789154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.789343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.789372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.803352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.803717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.803750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.818955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.819371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.819406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.834670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.835045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.835071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.849255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 
00:26:58.510 [2024-07-25 19:17:50.849662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.849689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.864069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.864481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.864522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.879079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.879517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.879544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.893607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.893984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.894025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.908932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.909338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.909381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.923284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.923693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.923720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.937797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.938025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.938053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 19:17:50.952226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.952625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.952652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.510 [2024-07-25 
19:17:50.967250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.510 [2024-07-25 19:17:50.967636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.510 [2024-07-25 19:17:50.967675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.768 [2024-07-25 19:17:50.982043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.768 [2024-07-25 19:17:50.982250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.768 [2024-07-25 19:17:50.982280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.768 [2024-07-25 19:17:50.997087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.768 [2024-07-25 19:17:50.997339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.768 [2024-07-25 19:17:50.997368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.768 [2024-07-25 19:17:51.011617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.768 [2024-07-25 19:17:51.012028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.768 [2024-07-25 19:17:51.012055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.768 [2024-07-25 19:17:51.026507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.768 [2024-07-25 19:17:51.026919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.768 [2024-07-25 19:17:51.026947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.768 [2024-07-25 19:17:51.041460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.768 [2024-07-25 19:17:51.041853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.768 [2024-07-25 19:17:51.041885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.768 [2024-07-25 19:17:51.055927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.768 [2024-07-25 19:17:51.056124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.768 [2024-07-25 19:17:51.056162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.768 [2024-07-25 19:17:51.071475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.768 [2024-07-25 19:17:51.071870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.768 [2024-07-25 19:17:51.071914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.768 [2024-07-25 19:17:51.086390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.768 [2024-07-25 19:17:51.086789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.768 [2024-07-25 19:17:51.086830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.768 [2024-07-25 19:17:51.101589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.768 [2024-07-25 19:17:51.101983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.768 [2024-07-25 19:17:51.102028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.768 [2024-07-25 19:17:51.116886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.768 [2024-07-25 19:17:51.117283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.768 [2024-07-25 19:17:51.117332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.768 [2024-07-25 19:17:51.133164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.769 [2024-07-25 19:17:51.133561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.769 [2024-07-25 19:17:51.133604] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.769 [2024-07-25 19:17:51.148032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.769 [2024-07-25 19:17:51.148433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.769 [2024-07-25 19:17:51.148477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.769 [2024-07-25 19:17:51.162626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.769 [2024-07-25 19:17:51.163000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.769 [2024-07-25 19:17:51.163028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.769 [2024-07-25 19:17:51.178269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.769 [2024-07-25 19:17:51.178664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.769 [2024-07-25 19:17:51.178706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.769 [2024-07-25 19:17:51.193138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.769 [2024-07-25 19:17:51.193574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:58.769 [2024-07-25 19:17:51.193604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.769 [2024-07-25 19:17:51.208338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.769 [2024-07-25 19:17:51.208586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.769 [2024-07-25 19:17:51.208628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.769 [2024-07-25 19:17:51.223789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:58.769 [2024-07-25 19:17:51.224238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.769 [2024-07-25 19:17:51.224266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.027 [2024-07-25 19:17:51.239021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.027 [2024-07-25 19:17:51.239443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.027 [2024-07-25 19:17:51.239489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.027 [2024-07-25 19:17:51.254885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.027 [2024-07-25 19:17:51.255291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.027 [2024-07-25 19:17:51.255335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.027 [2024-07-25 19:17:51.269488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.027 [2024-07-25 19:17:51.269863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.027 [2024-07-25 19:17:51.269906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.027 [2024-07-25 19:17:51.285466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.027 [2024-07-25 19:17:51.285832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.027 [2024-07-25 19:17:51.285860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.027 [2024-07-25 19:17:51.300491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.027 [2024-07-25 19:17:51.300881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.027 [2024-07-25 19:17:51.300908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.027 [2024-07-25 19:17:51.316090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.027 [2024-07-25 19:17:51.316486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.027 [2024-07-25 19:17:51.316528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.027 [2024-07-25 19:17:51.331162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.027 [2024-07-25 19:17:51.331396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.027 [2024-07-25 19:17:51.331424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.027 [2024-07-25 19:17:51.345656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.027 [2024-07-25 19:17:51.346030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.027 [2024-07-25 19:17:51.346056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.028 [2024-07-25 19:17:51.361344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.028 [2024-07-25 19:17:51.361745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.028 [2024-07-25 19:17:51.361788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.028 [2024-07-25 19:17:51.376444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 
00:26:59.028 [2024-07-25 19:17:51.376853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.028 [2024-07-25 19:17:51.376879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.028 [2024-07-25 19:17:51.390992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.028 [2024-07-25 19:17:51.391388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.028 [2024-07-25 19:17:51.391417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.028 [2024-07-25 19:17:51.406331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.028 [2024-07-25 19:17:51.406746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.028 [2024-07-25 19:17:51.406778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.028 [2024-07-25 19:17:51.421925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.028 [2024-07-25 19:17:51.422361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.028 [2024-07-25 19:17:51.422388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.028 [2024-07-25 19:17:51.437412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.028 [2024-07-25 19:17:51.437806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.028 [2024-07-25 19:17:51.437848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.028 [2024-07-25 19:17:51.453225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.028 [2024-07-25 19:17:51.453653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.028 [2024-07-25 19:17:51.453687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.028 [2024-07-25 19:17:51.468994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.028 [2024-07-25 19:17:51.469397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.028 [2024-07-25 19:17:51.469425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.028 [2024-07-25 19:17:51.484317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.028 [2024-07-25 19:17:51.484683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.028 [2024-07-25 19:17:51.484726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.286 [2024-07-25 
19:17:51.500005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.286 [2024-07-25 19:17:51.500426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.286 [2024-07-25 19:17:51.500457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.286 [2024-07-25 19:17:51.515215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.286 [2024-07-25 19:17:51.515618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.286 [2024-07-25 19:17:51.515646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.286 [2024-07-25 19:17:51.530550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.286 [2024-07-25 19:17:51.530929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.286 [2024-07-25 19:17:51.530970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.286 [2024-07-25 19:17:51.546412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.286 [2024-07-25 19:17:51.546786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.286 [2024-07-25 19:17:51.546828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.286 [2024-07-25 19:17:51.561189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.286 [2024-07-25 19:17:51.561576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.286 [2024-07-25 19:17:51.561618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.286 [2024-07-25 19:17:51.576100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.286 [2024-07-25 19:17:51.576527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.286 [2024-07-25 19:17:51.576553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.286 [2024-07-25 19:17:51.592002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.286 [2024-07-25 19:17:51.592419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.286 [2024-07-25 19:17:51.592447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.286 [2024-07-25 19:17:51.606547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.286 [2024-07-25 19:17:51.606980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.286 [2024-07-25 19:17:51.607025] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.286 [2024-07-25 19:17:51.622784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.286 [2024-07-25 19:17:51.623169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.286 [2024-07-25 19:17:51.623212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.286 [2024-07-25 19:17:51.638127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.286 [2024-07-25 19:17:51.638545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.286 [2024-07-25 19:17:51.638571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.286 [2024-07-25 19:17:51.653582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.286 [2024-07-25 19:17:51.653970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.286 [2024-07-25 19:17:51.653997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.286 [2024-07-25 19:17:51.668768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90 00:26:59.286 [2024-07-25 19:17:51.669167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.286 [2024-07-25 
19:17:51.669209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:59.286 [2024-07-25 19:17:51.684437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90
00:26:59.286 [2024-07-25 19:17:51.684830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.286 [2024-07-25 19:17:51.684856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... output truncated: the same three-entry pattern (tcp.c:2113 data_crc32_calc_done data digest error, nvme_io_qpair_print_command WRITE with varying lba, spdk_nvme_print_completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd cycling 0001/0021/0041/0061) repeats roughly every 15 ms on tqpair=(0x104a270) from 19:17:51.699 through 19:17:52.474 ...]
00:27:00.064 [2024-07-25 19:17:52.473694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104a270) with pdu=0x2000190fef90
00:27:00.064 [2024-07-25 19:17:52.474053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.064 [2024-07-25 19:17:52.474099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.064 00:27:00.064
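After driving these injected data-digest failures, the harness counts them by piping `bdev_get_iostat` JSON through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` and asserting the count is nonzero. A minimal Python equivalent of that extraction, against a hypothetical payload shaped only around the fields the jq path touches (the real RPC output carries many more fields):

```python
import json

# Hypothetical, trimmed-down bdev_get_iostat-style payload; only the
# fields reached by the jq filter are modeled here.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 132
          }
        }
      }
    }
  ]
}
""")

def transient_errcount(iostat: dict) -> int:
    """Mirrors jq '.bdevs[0] | .driver_specific | .nvme_error |
    .status_code | .command_transient_transport_error'."""
    return (iostat["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])

count = transient_errcount(sample)
print(count)  # prints 132
assert count > 0  # same check as the test's (( count > 0 ))
```

The value 132 matches the `(( 132 > 0 ))` check visible later in this trace; the payload structure itself is an illustrative assumption.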
Latency(us)
00:27:00.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:00.064 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:00.064 nvme0n1 : 2.01 2053.19 256.65 0.00 0.00 7773.21 6650.69 18738.44
00:27:00.064 ===================================================================================================================
00:27:00.064 Total : 2053.19 256.65 0.00 0.00 7773.21 6650.69 18738.44
00:27:00.064 0
00:27:00.064 19:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:00.064 19:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:00.064 19:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:00.064 | .driver_specific
00:27:00.064 | .nvme_error
00:27:00.064 | .status_code
00:27:00.064 | .command_transient_transport_error'
00:27:00.064 19:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:00.322 19:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 132 > 0 ))
00:27:00.322 19:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1010974
00:27:00.322 19:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1010974 ']'
00:27:00.322 19:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1010974
00:27:00.322 19:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:27:00.322 19:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:00.322 19:17:52
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1010974 00:27:00.322 19:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:00.322 19:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:00.322 19:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1010974' 00:27:00.322 killing process with pid 1010974 00:27:00.322 19:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1010974 00:27:00.322 Received shutdown signal, test time was about 2.000000 seconds 00:27:00.322 00:27:00.322 Latency(us) 00:27:00.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.322 =================================================================================================================== 00:27:00.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:00.322 19:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1010974 00:27:00.888 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1009479 00:27:00.888 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1009479 ']' 00:27:00.888 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1009479 00:27:00.888 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:00.888 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:00.888 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1009479 00:27:00.888 19:17:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:00.888 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:00.888 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1009479' 00:27:00.888 killing process with pid 1009479 00:27:00.888 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1009479 00:27:00.888 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1009479 00:27:01.148 00:27:01.148 real 0m16.914s 00:27:01.148 user 0m33.523s 00:27:01.148 sys 0m4.090s 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.148 ************************************ 00:27:01.148 END TEST nvmf_digest_error 00:27:01.148 ************************************ 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:01.148 rmmod nvme_tcp 00:27:01.148 rmmod nvme_fabrics 00:27:01.148 
rmmod nvme_keyring 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1009479 ']' 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1009479 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1009479 ']' 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1009479 00:27:01.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1009479) - No such process 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1009479 is not found' 00:27:01.148 Process with pid 1009479 is not found 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.148 19:17:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.136 19:17:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:27:03.136 00:27:03.136 real 0m37.323s 00:27:03.136 user 1m5.660s 00:27:03.136 sys 0m9.881s 00:27:03.136 19:17:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:03.136 19:17:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:03.136 ************************************ 00:27:03.136 END TEST nvmf_digest 00:27:03.136 ************************************ 00:27:03.136 19:17:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:03.136 19:17:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:03.136 19:17:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:03.136 19:17:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:03.136 19:17:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:03.136 19:17:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:03.136 19:17:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.136 ************************************ 00:27:03.136 START TEST nvmf_bdevperf 00:27:03.136 ************************************ 00:27:03.136 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:03.136 * Looking for test storage... 
00:27:03.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:03.395 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:03.396 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.396 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.396 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.396 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:03.396 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:03.396 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:03.396 19:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.929 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.930 19:17:58 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:05.930 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:05.930 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:05.930 Found net devices under 0000:09:00.0: cvl_0_0 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:05.930 Found net devices under 0000:09:00.1: cvl_0_1 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:05.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:05.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:27:05.930 00:27:05.930 --- 10.0.0.2 ping statistics --- 00:27:05.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.930 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:05.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:27:05.930 00:27:05.930 --- 10.0.0.1 ping statistics --- 00:27:05.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.930 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:05.930 
19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1013753 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1013753 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1013753 ']' 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:05.930 19:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:05.930 [2024-07-25 19:17:58.298082] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
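The harness treats the two ping summaries above as a pass/fail gate on the namespace link before starting the target. A small, hypothetical helper for extracting the loss percentage from such a summary line (the sample input is copied from the output above; the helper name is an assumption, not SPDK code):

```shell
#!/usr/bin/env bash
# Hypothetical helper: pull the packet-loss percentage out of a ping summary.
packet_loss() {
	# matches e.g. "1 packets transmitted, 1 received, 0% packet loss, time 0ms"
	sed -n 's/.*, \([0-9][0-9]*\)% packet loss.*/\1/p'
}

summary='1 packets transmitted, 1 received, 0% packet loss, time 0ms'
loss=$(printf '%s\n' "$summary" | packet_loss)
if [ "$loss" -eq 0 ]; then
	echo "link OK (0% loss)"
else
	echo "link degraded: ${loss}% loss"
fi
```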
00:27:05.930 [2024-07-25 19:17:58.298199] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.930 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.930 [2024-07-25 19:17:58.371685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:06.189 [2024-07-25 19:17:58.479692] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.189 [2024-07-25 19:17:58.479737] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.189 [2024-07-25 19:17:58.479763] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.189 [2024-07-25 19:17:58.479782] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.189 [2024-07-25 19:17:58.479797] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
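`waitforlisten 1013753` above blocks until the freshly launched `nvmf_tgt` is ready to accept RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern, assuming only a socket path and a retry budget (the real helper also checks the pid is still alive and probes the RPC itself):

```shell
#!/usr/bin/env bash
# Minimal waitforlisten-style loop: poll for a UNIX-domain socket to appear,
# giving up after max_retries attempts. Sketch only; SPDK's helper does more.
wait_for_sock() {
	local sock=$1 max_retries=${2:-100}
	while [ "$max_retries" -gt 0 ]; do
		[ -S "$sock" ] && return 0
		max_retries=$((max_retries - 1))
		sleep 0.1
	done
	return 1
}

if wait_for_sock /var/tmp/spdk.sock 3; then
	echo "RPC socket is up"
else
	echo "timed out waiting for RPC socket"
fi
```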
00:27:06.189 [2024-07-25 19:17:58.479892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.189 [2024-07-25 19:17:58.479963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:06.189 [2024-07-25 19:17:58.479969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.123 [2024-07-25 19:17:59.301821] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.123 Malloc0 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.123 [2024-07-25 19:17:59.373394] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:07.123 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:07.124 
19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:07.124 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.124 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.124 { 00:27:07.124 "params": { 00:27:07.124 "name": "Nvme$subsystem", 00:27:07.124 "trtype": "$TEST_TRANSPORT", 00:27:07.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.124 "adrfam": "ipv4", 00:27:07.124 "trsvcid": "$NVMF_PORT", 00:27:07.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.124 "hdgst": ${hdgst:-false}, 00:27:07.124 "ddgst": ${ddgst:-false} 00:27:07.124 }, 00:27:07.124 "method": "bdev_nvme_attach_controller" 00:27:07.124 } 00:27:07.124 EOF 00:27:07.124 )") 00:27:07.124 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:07.124 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:07.124 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:07.124 19:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:07.124 "params": { 00:27:07.124 "name": "Nvme1", 00:27:07.124 "trtype": "tcp", 00:27:07.124 "traddr": "10.0.0.2", 00:27:07.124 "adrfam": "ipv4", 00:27:07.124 "trsvcid": "4420", 00:27:07.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:07.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:07.124 "hdgst": false, 00:27:07.124 "ddgst": false 00:27:07.124 }, 00:27:07.124 "method": "bdev_nvme_attach_controller" 00:27:07.124 }' 00:27:07.124 [2024-07-25 19:17:59.422417] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
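`gen_nvmf_target_json` builds the bdevperf config passed via `--json /dev/fd/62` by expanding a per-subsystem heredoc template, exactly as traced in `nvmf/common.sh@554` above. A self-contained re-creation of that pattern (the values are taken from the printed result above; everything else is a sketch, not SPDK's verbatim code):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json heredoc pattern: expand a per-subsystem
# template into the "bdev_nvme_attach_controller" params block for bdevperf.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

Because `$subsystem`, `$hdgst`, and `$ddgst` are expanded inside the heredoc, the same template yields the concrete `Nvme1`/`hdgst: false` JSON that `printf '%s\n'` emits in the trace.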
00:27:07.124 [2024-07-25 19:17:59.422500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013904 ] 00:27:07.124 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.124 [2024-07-25 19:17:59.490820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.382 [2024-07-25 19:17:59.604802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.639 Running I/O for 1 seconds... 00:27:08.572 00:27:08.572 Latency(us) 00:27:08.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.572 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:08.572 Verification LBA range: start 0x0 length 0x4000 00:27:08.572 Nvme1n1 : 1.01 8746.43 34.17 0.00 0.00 14568.69 2961.26 15243.19 00:27:08.572 =================================================================================================================== 00:27:08.572 Total : 8746.43 34.17 0.00 0.00 14568.69 2961.26 15243.19 00:27:08.830 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1014233 00:27:08.830 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:08.830 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:08.830 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:08.830 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:08.830 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:08.830 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.830 19:18:01 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.830 { 00:27:08.830 "params": { 00:27:08.830 "name": "Nvme$subsystem", 00:27:08.830 "trtype": "$TEST_TRANSPORT", 00:27:08.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.830 "adrfam": "ipv4", 00:27:08.830 "trsvcid": "$NVMF_PORT", 00:27:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.830 "hdgst": ${hdgst:-false}, 00:27:08.830 "ddgst": ${ddgst:-false} 00:27:08.830 }, 00:27:08.830 "method": "bdev_nvme_attach_controller" 00:27:08.830 } 00:27:08.830 EOF 00:27:08.830 )") 00:27:08.830 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:08.830 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:08.830 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:08.831 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:08.831 "params": { 00:27:08.831 "name": "Nvme1", 00:27:08.831 "trtype": "tcp", 00:27:08.831 "traddr": "10.0.0.2", 00:27:08.831 "adrfam": "ipv4", 00:27:08.831 "trsvcid": "4420", 00:27:08.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:08.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:08.831 "hdgst": false, 00:27:08.831 "ddgst": false 00:27:08.831 }, 00:27:08.831 "method": "bdev_nvme_attach_controller" 00:27:08.831 }' 00:27:08.831 [2024-07-25 19:18:01.197007] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:08.831 [2024-07-25 19:18:01.197114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014233 ] 00:27:08.831 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.831 [2024-07-25 19:18:01.266694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.088 [2024-07-25 19:18:01.375701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.345 Running I/O for 15 seconds... 00:27:11.874 19:18:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1013753 00:27:11.874 19:18:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:11.874 [2024-07-25 19:18:04.168611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.874 [2024-07-25 19:18:04.168662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.874 [2024-07-25 19:18:04.168699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.874 [2024-07-25 19:18:04.168720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.874 [2024-07-25 19:18:04.168739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.874 [2024-07-25 19:18:04.168755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.874 [2024-07-25 19:18:04.168774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37968 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.874 [2024-07-25 19:18:04.168790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.874 [2024-07-25 19:18:04.168808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.874 [2024-07-25 19:18:04.168825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.874 [2024-07-25 19:18:04.168843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.874 [2024-07-25 19:18:04.168859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.874 [2024-07-25 19:18:04.168876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.874 [2024-07-25 19:18:04.168892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.874 [2024-07-25 19:18:04.168910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.874 [2024-07-25 19:18:04.168926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.874 [2024-07-25 19:18:04.168944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.874 [2024-07-25 19:18:04.168962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.874 [2024-07-25 
19:18:04.168982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.874 [2024-07-25 19:18:04.169007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.874 [2024-07-25 19:18:04.169029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:37128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169215] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:37176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37192 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169626] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.169970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.169988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:11.875 [2024-07-25 19:18:04.170003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.170020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.170035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.170052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.170067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.170085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.170116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.170134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.170165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.170182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.170196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.170212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.170226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.170241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.170255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.170271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.170285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.170300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.170314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.170333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.170348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.875 [2024-07-25 19:18:04.170364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.875 [2024-07-25 19:18:04.170399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 
[2024-07-25 19:18:04.170612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:37496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.170968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.170985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 
[2024-07-25 19:18:04.171189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:37632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.876 [2024-07-25 19:18:04.171709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-07-25 19:18:04.171724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.171740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 
[2024-07-25 19:18:04.171755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.171771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.171797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.171814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.877 [2024-07-25 19:18:04.171829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.171846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.877 [2024-07-25 19:18:04.171861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.171878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.877 [2024-07-25 19:18:04.171893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.171909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.877 [2024-07-25 19:18:04.171924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.171941] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.877 [2024-07-25 19:18:04.171957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.171974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.877 [2024-07-25 19:18:04.171991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.877 [2024-07-25 19:18:04.172301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.877 [2024-07-25 19:18:04.172330] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.877 [2024-07-25 19:18:04.172359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.877 [2024-07-25 19:18:04.172403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.877 [2024-07-25 19:18:04.172433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.877 [2024-07-25 19:18:04.172478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.877 [2024-07-25 19:18:04.172516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 
nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.877 [2024-07-25 19:18:04.172548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:11.877 [2024-07-25 19:18:04.172727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.172979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.172996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-07-25 19:18:04.173011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.877 [2024-07-25 19:18:04.173028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-07-25 19:18:04.173043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.878 [2024-07-25 19:18:04.173060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac4830 is same with the state(5) to be set 00:27:11.878 [2024-07-25 19:18:04.173078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:11.878 [2024-07-25 19:18:04.173090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:11.878 [2024-07-25 19:18:04.173111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37936 len:8 PRP1 0x0 PRP2 0x0 00:27:11.878 [2024-07-25 19:18:04.173152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.878 [2024-07-25 19:18:04.173217] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ac4830 was disconnected and freed. reset controller. 00:27:11.878 [2024-07-25 19:18:04.173287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.878 [2024-07-25 19:18:04.173309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.878 [2024-07-25 19:18:04.173325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.878 [2024-07-25 19:18:04.173442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.878 [2024-07-25 19:18:04.173471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.878 [2024-07-25 19:18:04.173486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.878 [2024-07-25 19:18:04.173504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.878 [2024-07-25 19:18:04.173518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.878 [2024-07-25 19:18:04.173532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 
00:27:11.878 [2024-07-25 19:18:04.177279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:11.878 [2024-07-25 19:18:04.177316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:11.878 [2024-07-25 19:18:04.178050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.878 [2024-07-25 19:18:04.178099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:11.878 [2024-07-25 19:18:04.178127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:11.878 [2024-07-25 19:18:04.178360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:11.878 [2024-07-25 19:18:04.178629] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:11.878 [2024-07-25 19:18:04.178653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:11.878 [2024-07-25 19:18:04.178670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:11.878 [2024-07-25 19:18:04.182476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:11.878 [2024-07-25 19:18:04.191588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:11.878 [2024-07-25 19:18:04.192108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.878 [2024-07-25 19:18:04.192141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:11.878 [2024-07-25 19:18:04.192159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:11.878 [2024-07-25 19:18:04.192399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:11.878 [2024-07-25 19:18:04.192643] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:11.878 [2024-07-25 19:18:04.192665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:11.878 [2024-07-25 19:18:04.192681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:11.878 [2024-07-25 19:18:04.196266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:11.878 [2024-07-25 19:18:04.205562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:11.878 [2024-07-25 19:18:04.206030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.878 [2024-07-25 19:18:04.206062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:11.878 [2024-07-25 19:18:04.206079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:11.878 [2024-07-25 19:18:04.206333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:11.878 [2024-07-25 19:18:04.206577] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:11.878 [2024-07-25 19:18:04.206600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:11.878 [2024-07-25 19:18:04.206615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:11.878 [2024-07-25 19:18:04.210196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:11.878 [2024-07-25 19:18:04.219496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:11.878 [2024-07-25 19:18:04.219999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.878 [2024-07-25 19:18:04.220048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:11.878 [2024-07-25 19:18:04.220066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:11.878 [2024-07-25 19:18:04.220316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:11.878 [2024-07-25 19:18:04.220565] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:11.878 [2024-07-25 19:18:04.220588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:11.878 [2024-07-25 19:18:04.220603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:11.878 [2024-07-25 19:18:04.224185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:11.878 [2024-07-25 19:18:04.233470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:11.878 [2024-07-25 19:18:04.233925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.878 [2024-07-25 19:18:04.233955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:11.878 [2024-07-25 19:18:04.233973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:11.878 [2024-07-25 19:18:04.234224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:11.878 [2024-07-25 19:18:04.234467] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:11.878 [2024-07-25 19:18:04.234490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:11.878 [2024-07-25 19:18:04.234505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:11.878 [2024-07-25 19:18:04.238080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:11.878 [2024-07-25 19:18:04.247403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:11.878 [2024-07-25 19:18:04.247838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.878 [2024-07-25 19:18:04.247869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:11.878 [2024-07-25 19:18:04.247887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:11.878 [2024-07-25 19:18:04.248139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:11.878 [2024-07-25 19:18:04.248382] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:11.878 [2024-07-25 19:18:04.248405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:11.879 [2024-07-25 19:18:04.248420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:11.879 [2024-07-25 19:18:04.251995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:11.879 [2024-07-25 19:18:04.261289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:11.879 [2024-07-25 19:18:04.261759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.879 [2024-07-25 19:18:04.261789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:11.879 [2024-07-25 19:18:04.261807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:11.879 [2024-07-25 19:18:04.262045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:11.879 [2024-07-25 19:18:04.262300] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:11.879 [2024-07-25 19:18:04.262323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:11.879 [2024-07-25 19:18:04.262338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:11.879 [2024-07-25 19:18:04.265920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:11.879 [2024-07-25 19:18:04.275214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:11.879 [2024-07-25 19:18:04.275676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.879 [2024-07-25 19:18:04.275707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:11.879 [2024-07-25 19:18:04.275724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:11.879 [2024-07-25 19:18:04.275963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:11.879 [2024-07-25 19:18:04.276218] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:11.879 [2024-07-25 19:18:04.276242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:11.879 [2024-07-25 19:18:04.276258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:11.879 [2024-07-25 19:18:04.279842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:11.879 [2024-07-25 19:18:04.289151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:11.879 [2024-07-25 19:18:04.289582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.879 [2024-07-25 19:18:04.289613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:11.879 [2024-07-25 19:18:04.289630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:11.879 [2024-07-25 19:18:04.289869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:11.879 [2024-07-25 19:18:04.290123] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:11.879 [2024-07-25 19:18:04.290146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:11.879 [2024-07-25 19:18:04.290162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:11.879 [2024-07-25 19:18:04.293735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:11.879 [2024-07-25 19:18:04.303013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:11.879 [2024-07-25 19:18:04.303424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.879 [2024-07-25 19:18:04.303456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:11.879 [2024-07-25 19:18:04.303474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:11.879 [2024-07-25 19:18:04.303712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:11.879 [2024-07-25 19:18:04.303955] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:11.879 [2024-07-25 19:18:04.303977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:11.879 [2024-07-25 19:18:04.303993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:11.879 [2024-07-25 19:18:04.307578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:11.879 [2024-07-25 19:18:04.316855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:11.879 [2024-07-25 19:18:04.317324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.879 [2024-07-25 19:18:04.317355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:11.879 [2024-07-25 19:18:04.317378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:11.879 [2024-07-25 19:18:04.317618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:11.879 [2024-07-25 19:18:04.317860] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:11.879 [2024-07-25 19:18:04.317883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:11.879 [2024-07-25 19:18:04.317898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:11.879 [2024-07-25 19:18:04.321484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:11.879 [2024-07-25 19:18:04.330769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:11.879 [2024-07-25 19:18:04.331273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.879 [2024-07-25 19:18:04.331301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:11.879 [2024-07-25 19:18:04.331316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:11.879 [2024-07-25 19:18:04.331565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:11.879 [2024-07-25 19:18:04.331807] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:11.879 [2024-07-25 19:18:04.331830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:11.879 [2024-07-25 19:18:04.331845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:11.879 [2024-07-25 19:18:04.335429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.138 [2024-07-25 19:18:04.344540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.138 [2024-07-25 19:18:04.344957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.139 [2024-07-25 19:18:04.344984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.139 [2024-07-25 19:18:04.344999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.139 [2024-07-25 19:18:04.345261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.139 [2024-07-25 19:18:04.345495] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.139 [2024-07-25 19:18:04.345528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.139 [2024-07-25 19:18:04.345543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.139 [2024-07-25 19:18:04.349092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.139 [2024-07-25 19:18:04.358488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.139 [2024-07-25 19:18:04.358936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.139 [2024-07-25 19:18:04.358967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.139 [2024-07-25 19:18:04.358985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.139 [2024-07-25 19:18:04.359231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.139 [2024-07-25 19:18:04.359459] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.139 [2024-07-25 19:18:04.359488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.139 [2024-07-25 19:18:04.359504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.139 [2024-07-25 19:18:04.363045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.139 [2024-07-25 19:18:04.372504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.139 [2024-07-25 19:18:04.372997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.139 [2024-07-25 19:18:04.373052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.139 [2024-07-25 19:18:04.373070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.139 [2024-07-25 19:18:04.373314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.139 [2024-07-25 19:18:04.373572] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.139 [2024-07-25 19:18:04.373593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.139 [2024-07-25 19:18:04.373606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.139 [2024-07-25 19:18:04.377234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.139 [2024-07-25 19:18:04.386364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.139 [2024-07-25 19:18:04.386861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.139 [2024-07-25 19:18:04.386908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.139 [2024-07-25 19:18:04.386926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.139 [2024-07-25 19:18:04.387187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.139 [2024-07-25 19:18:04.387410] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.139 [2024-07-25 19:18:04.387434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.139 [2024-07-25 19:18:04.387449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.139 [2024-07-25 19:18:04.391012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.139 [2024-07-25 19:18:04.400206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.139 [2024-07-25 19:18:04.400634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.139 [2024-07-25 19:18:04.400664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.139 [2024-07-25 19:18:04.400682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.139 [2024-07-25 19:18:04.400920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.139 [2024-07-25 19:18:04.401190] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.139 [2024-07-25 19:18:04.401210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.139 [2024-07-25 19:18:04.401222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.139 [2024-07-25 19:18:04.404729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.139 [2024-07-25 19:18:04.413973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.139 [2024-07-25 19:18:04.414469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.139 [2024-07-25 19:18:04.414496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.139 [2024-07-25 19:18:04.414512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.139 [2024-07-25 19:18:04.414755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.139 [2024-07-25 19:18:04.414989] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.139 [2024-07-25 19:18:04.415009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.139 [2024-07-25 19:18:04.415022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.139 [2024-07-25 19:18:04.418581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.139 [2024-07-25 19:18:04.427911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.139 [2024-07-25 19:18:04.428377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.139 [2024-07-25 19:18:04.428411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.139 [2024-07-25 19:18:04.428427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.139 [2024-07-25 19:18:04.428698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.139 [2024-07-25 19:18:04.428940] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.139 [2024-07-25 19:18:04.428961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.139 [2024-07-25 19:18:04.428974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.139 [2024-07-25 19:18:04.432593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.139 [2024-07-25 19:18:04.441841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.139 [2024-07-25 19:18:04.442259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.139 [2024-07-25 19:18:04.442287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.139 [2024-07-25 19:18:04.442303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.139 [2024-07-25 19:18:04.442531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.139 [2024-07-25 19:18:04.442768] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.139 [2024-07-25 19:18:04.442792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.139 [2024-07-25 19:18:04.442807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.139 [2024-07-25 19:18:04.446394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.139 [2024-07-25 19:18:04.455800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.139 [2024-07-25 19:18:04.456241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.139 [2024-07-25 19:18:04.456269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.139 [2024-07-25 19:18:04.456285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.139 [2024-07-25 19:18:04.456552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.139 [2024-07-25 19:18:04.456796] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.139 [2024-07-25 19:18:04.456819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.139 [2024-07-25 19:18:04.456834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.139 [2024-07-25 19:18:04.460478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.139 [2024-07-25 19:18:04.469841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.139 [2024-07-25 19:18:04.470257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.139 [2024-07-25 19:18:04.470286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.139 [2024-07-25 19:18:04.470302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.139 [2024-07-25 19:18:04.470555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.139 [2024-07-25 19:18:04.470798] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.139 [2024-07-25 19:18:04.470821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.139 [2024-07-25 19:18:04.470836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.139 [2024-07-25 19:18:04.474435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.139 [2024-07-25 19:18:04.483712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.139 [2024-07-25 19:18:04.484175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-25 19:18:04.484209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.140 [2024-07-25 19:18:04.484229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.140 [2024-07-25 19:18:04.484486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.140 [2024-07-25 19:18:04.484729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.140 [2024-07-25 19:18:04.484751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.140 [2024-07-25 19:18:04.484766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.140 [2024-07-25 19:18:04.488342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.140 [2024-07-25 19:18:04.497612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.140 [2024-07-25 19:18:04.498076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-25 19:18:04.498124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.140 [2024-07-25 19:18:04.498143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.140 [2024-07-25 19:18:04.498387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.140 [2024-07-25 19:18:04.498642] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.140 [2024-07-25 19:18:04.498664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.140 [2024-07-25 19:18:04.498686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.140 [2024-07-25 19:18:04.502269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.140 [2024-07-25 19:18:04.511560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.140 [2024-07-25 19:18:04.511993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-25 19:18:04.512023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.140 [2024-07-25 19:18:04.512041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.140 [2024-07-25 19:18:04.512292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.140 [2024-07-25 19:18:04.512535] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.140 [2024-07-25 19:18:04.512558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.140 [2024-07-25 19:18:04.512573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.140 [2024-07-25 19:18:04.516158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.140 [2024-07-25 19:18:04.525453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.140 [2024-07-25 19:18:04.525907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-25 19:18:04.525938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.140 [2024-07-25 19:18:04.525955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.140 [2024-07-25 19:18:04.526206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.140 [2024-07-25 19:18:04.526449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.140 [2024-07-25 19:18:04.526472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.140 [2024-07-25 19:18:04.526487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.140 [2024-07-25 19:18:04.530064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.140 [2024-07-25 19:18:04.539373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.140 [2024-07-25 19:18:04.539980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-25 19:18:04.540048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.140 [2024-07-25 19:18:04.540065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.140 [2024-07-25 19:18:04.540311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.140 [2024-07-25 19:18:04.540554] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.140 [2024-07-25 19:18:04.540576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.140 [2024-07-25 19:18:04.540591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.140 [2024-07-25 19:18:04.544177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.140 [2024-07-25 19:18:04.553263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.140 [2024-07-25 19:18:04.553749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-25 19:18:04.553776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.140 [2024-07-25 19:18:04.553806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.140 [2024-07-25 19:18:04.554066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.140 [2024-07-25 19:18:04.554321] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.140 [2024-07-25 19:18:04.554345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.140 [2024-07-25 19:18:04.554360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.140 [2024-07-25 19:18:04.557934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.140 [2024-07-25 19:18:04.567248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.140 [2024-07-25 19:18:04.567702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-25 19:18:04.567732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.140 [2024-07-25 19:18:04.567750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.140 [2024-07-25 19:18:04.567989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.140 [2024-07-25 19:18:04.568244] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.140 [2024-07-25 19:18:04.568267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.140 [2024-07-25 19:18:04.568282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.140 [2024-07-25 19:18:04.571858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.140 [2024-07-25 19:18:04.581157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.140 [2024-07-25 19:18:04.581618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-25 19:18:04.581648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.140 [2024-07-25 19:18:04.581665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.140 [2024-07-25 19:18:04.581904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.140 [2024-07-25 19:18:04.582159] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.140 [2024-07-25 19:18:04.582183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.140 [2024-07-25 19:18:04.582198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.140 [2024-07-25 19:18:04.585787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.140 [2024-07-25 19:18:04.595077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.140 [2024-07-25 19:18:04.595533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-25 19:18:04.595564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.140 [2024-07-25 19:18:04.595581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.140 [2024-07-25 19:18:04.595820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.140 [2024-07-25 19:18:04.596068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.140 [2024-07-25 19:18:04.596091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.140 [2024-07-25 19:18:04.596117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.140 [2024-07-25 19:18:04.599695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.399 [2024-07-25 19:18:04.608995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.399 [2024-07-25 19:18:04.609458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.400 [2024-07-25 19:18:04.609490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.400 [2024-07-25 19:18:04.609507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.400 [2024-07-25 19:18:04.609746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.400 [2024-07-25 19:18:04.609988] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.400 [2024-07-25 19:18:04.610010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.400 [2024-07-25 19:18:04.610025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.400 [2024-07-25 19:18:04.613611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.400 [2024-07-25 19:18:04.622902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.400 [2024-07-25 19:18:04.623346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.400 [2024-07-25 19:18:04.623377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.400 [2024-07-25 19:18:04.623394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.400 [2024-07-25 19:18:04.623633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.400 [2024-07-25 19:18:04.623875] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.400 [2024-07-25 19:18:04.623897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.400 [2024-07-25 19:18:04.623912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.400 [2024-07-25 19:18:04.627515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.400 [2024-07-25 19:18:04.636803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.400 [2024-07-25 19:18:04.637234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.400 [2024-07-25 19:18:04.637265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.400 [2024-07-25 19:18:04.637282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.400 [2024-07-25 19:18:04.637521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.400 [2024-07-25 19:18:04.637763] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.400 [2024-07-25 19:18:04.637786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.400 [2024-07-25 19:18:04.637801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.400 [2024-07-25 19:18:04.641390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.400 [2024-07-25 19:18:04.650667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.400 [2024-07-25 19:18:04.651118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.400 [2024-07-25 19:18:04.651149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.400 [2024-07-25 19:18:04.651166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.400 [2024-07-25 19:18:04.651405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.400 [2024-07-25 19:18:04.651647] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.400 [2024-07-25 19:18:04.651670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.400 [2024-07-25 19:18:04.651685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.400 [2024-07-25 19:18:04.655267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.400 [2024-07-25 19:18:04.664535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.400 [2024-07-25 19:18:04.664994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.400 [2024-07-25 19:18:04.665023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.400 [2024-07-25 19:18:04.665041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.400 [2024-07-25 19:18:04.665289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.400 [2024-07-25 19:18:04.665532] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.400 [2024-07-25 19:18:04.665555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.400 [2024-07-25 19:18:04.665570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.400 [2024-07-25 19:18:04.669147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.400 [2024-07-25 19:18:04.678423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.400 [2024-07-25 19:18:04.678857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.400 [2024-07-25 19:18:04.678887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.400 [2024-07-25 19:18:04.678904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.400 [2024-07-25 19:18:04.679154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.400 [2024-07-25 19:18:04.679397] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.400 [2024-07-25 19:18:04.679420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.400 [2024-07-25 19:18:04.679435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.400 [2024-07-25 19:18:04.683022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.400 [2024-07-25 19:18:04.692348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.400 [2024-07-25 19:18:04.692799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.400 [2024-07-25 19:18:04.692835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.400 [2024-07-25 19:18:04.692853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.400 [2024-07-25 19:18:04.693092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.400 [2024-07-25 19:18:04.693345] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.400 [2024-07-25 19:18:04.693368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.400 [2024-07-25 19:18:04.693383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.400 [2024-07-25 19:18:04.696955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.400 [2024-07-25 19:18:04.706246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.400 [2024-07-25 19:18:04.706728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.400 [2024-07-25 19:18:04.706777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.400 [2024-07-25 19:18:04.706794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.400 [2024-07-25 19:18:04.707033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.400 [2024-07-25 19:18:04.707284] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.400 [2024-07-25 19:18:04.707307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.400 [2024-07-25 19:18:04.707322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.400 [2024-07-25 19:18:04.710898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.400 [2024-07-25 19:18:04.720185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.400 [2024-07-25 19:18:04.720636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.400 [2024-07-25 19:18:04.720667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.400 [2024-07-25 19:18:04.720684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.400 [2024-07-25 19:18:04.720922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.400 [2024-07-25 19:18:04.721176] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.400 [2024-07-25 19:18:04.721199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.400 [2024-07-25 19:18:04.721215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.400 [2024-07-25 19:18:04.724789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.400 [2024-07-25 19:18:04.734071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.400 [2024-07-25 19:18:04.734530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.400 [2024-07-25 19:18:04.734561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.400 [2024-07-25 19:18:04.734579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.401 [2024-07-25 19:18:04.734817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.401 [2024-07-25 19:18:04.735065] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.401 [2024-07-25 19:18:04.735088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.401 [2024-07-25 19:18:04.735113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.401 [2024-07-25 19:18:04.738691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.401 [2024-07-25 19:18:04.747969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.401 [2024-07-25 19:18:04.748424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.401 [2024-07-25 19:18:04.748455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.401 [2024-07-25 19:18:04.748472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.401 [2024-07-25 19:18:04.748711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.401 [2024-07-25 19:18:04.748953] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.401 [2024-07-25 19:18:04.748975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.401 [2024-07-25 19:18:04.748990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.401 [2024-07-25 19:18:04.752569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.401 [2024-07-25 19:18:04.761853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.401 [2024-07-25 19:18:04.762371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.401 [2024-07-25 19:18:04.762398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.401 [2024-07-25 19:18:04.762413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.401 [2024-07-25 19:18:04.762672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.401 [2024-07-25 19:18:04.762915] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.401 [2024-07-25 19:18:04.762938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.401 [2024-07-25 19:18:04.762952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.401 [2024-07-25 19:18:04.766532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.401 [2024-07-25 19:18:04.775811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.401 [2024-07-25 19:18:04.776260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.401 [2024-07-25 19:18:04.776291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.401 [2024-07-25 19:18:04.776308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.401 [2024-07-25 19:18:04.776546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.401 [2024-07-25 19:18:04.776789] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.401 [2024-07-25 19:18:04.776811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.401 [2024-07-25 19:18:04.776826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.401 [2024-07-25 19:18:04.780406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.401 [2024-07-25 19:18:04.789692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.401 [2024-07-25 19:18:04.790117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.401 [2024-07-25 19:18:04.790147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.401 [2024-07-25 19:18:04.790164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.401 [2024-07-25 19:18:04.790403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.401 [2024-07-25 19:18:04.790645] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.401 [2024-07-25 19:18:04.790668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.401 [2024-07-25 19:18:04.790683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.401 [2024-07-25 19:18:04.794263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.401 [2024-07-25 19:18:04.803538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.401 [2024-07-25 19:18:04.803996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.401 [2024-07-25 19:18:04.804026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.401 [2024-07-25 19:18:04.804044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.401 [2024-07-25 19:18:04.804292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.401 [2024-07-25 19:18:04.804535] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.401 [2024-07-25 19:18:04.804558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.401 [2024-07-25 19:18:04.804573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.401 [2024-07-25 19:18:04.808158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.401 [2024-07-25 19:18:04.817431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.401 [2024-07-25 19:18:04.817899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.401 [2024-07-25 19:18:04.817940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.401 [2024-07-25 19:18:04.817956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.401 [2024-07-25 19:18:04.818234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.401 [2024-07-25 19:18:04.818485] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.401 [2024-07-25 19:18:04.818508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.401 [2024-07-25 19:18:04.818523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.401 [2024-07-25 19:18:04.822090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.401 [2024-07-25 19:18:04.831371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.401 [2024-07-25 19:18:04.831819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.401 [2024-07-25 19:18:04.831846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.401 [2024-07-25 19:18:04.831868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.401 [2024-07-25 19:18:04.832127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.401 [2024-07-25 19:18:04.832382] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.401 [2024-07-25 19:18:04.832405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.401 [2024-07-25 19:18:04.832421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.401 [2024-07-25 19:18:04.835990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.401 [2024-07-25 19:18:04.845274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.401 [2024-07-25 19:18:04.845732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.401 [2024-07-25 19:18:04.845763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.401 [2024-07-25 19:18:04.845780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.401 [2024-07-25 19:18:04.846018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.401 [2024-07-25 19:18:04.846273] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.401 [2024-07-25 19:18:04.846296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.401 [2024-07-25 19:18:04.846312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.401 [2024-07-25 19:18:04.849884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.401 [2024-07-25 19:18:04.859168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.401 [2024-07-25 19:18:04.859594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.401 [2024-07-25 19:18:04.859624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.401 [2024-07-25 19:18:04.859642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.401 [2024-07-25 19:18:04.859880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.401 [2024-07-25 19:18:04.860134] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.401 [2024-07-25 19:18:04.860158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.401 [2024-07-25 19:18:04.860173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.401 [2024-07-25 19:18:04.863745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.660 [2024-07-25 19:18:04.873021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.660 [2024-07-25 19:18:04.873474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.660 [2024-07-25 19:18:04.873504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.660 [2024-07-25 19:18:04.873522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.660 [2024-07-25 19:18:04.873760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.660 [2024-07-25 19:18:04.874003] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.660 [2024-07-25 19:18:04.874031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.660 [2024-07-25 19:18:04.874046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.660 [2024-07-25 19:18:04.877627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.660 [2024-07-25 19:18:04.886937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.660 [2024-07-25 19:18:04.887400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.660 [2024-07-25 19:18:04.887431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.660 [2024-07-25 19:18:04.887448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.660 [2024-07-25 19:18:04.887687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.660 [2024-07-25 19:18:04.887929] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.660 [2024-07-25 19:18:04.887952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.660 [2024-07-25 19:18:04.887967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.660 [2024-07-25 19:18:04.891556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.660 [2024-07-25 19:18:04.900853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.660 [2024-07-25 19:18:04.901321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.660 [2024-07-25 19:18:04.901351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.660 [2024-07-25 19:18:04.901369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.660 [2024-07-25 19:18:04.901607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.660 [2024-07-25 19:18:04.901849] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.660 [2024-07-25 19:18:04.901872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.660 [2024-07-25 19:18:04.901887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.661 [2024-07-25 19:18:04.905474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.661 [2024-07-25 19:18:04.914770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.661 [2024-07-25 19:18:04.915221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.661 [2024-07-25 19:18:04.915252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.661 [2024-07-25 19:18:04.915270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.661 [2024-07-25 19:18:04.915509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.661 [2024-07-25 19:18:04.915751] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.661 [2024-07-25 19:18:04.915774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.661 [2024-07-25 19:18:04.915789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.661 [2024-07-25 19:18:04.919378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.661 [2024-07-25 19:18:04.928678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.661 [2024-07-25 19:18:04.929134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.661 [2024-07-25 19:18:04.929165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.661 [2024-07-25 19:18:04.929183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.661 [2024-07-25 19:18:04.929422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.661 [2024-07-25 19:18:04.929664] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.661 [2024-07-25 19:18:04.929686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.661 [2024-07-25 19:18:04.929701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.661 [2024-07-25 19:18:04.933287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.661 [2024-07-25 19:18:04.942593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.661 [2024-07-25 19:18:04.943022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.661 [2024-07-25 19:18:04.943053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.661 [2024-07-25 19:18:04.943070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.661 [2024-07-25 19:18:04.943318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.661 [2024-07-25 19:18:04.943561] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.661 [2024-07-25 19:18:04.943584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.661 [2024-07-25 19:18:04.943599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.661 [2024-07-25 19:18:04.947184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.661 [2024-07-25 19:18:04.956476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.661 [2024-07-25 19:18:04.956907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.661 [2024-07-25 19:18:04.956938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.661 [2024-07-25 19:18:04.956956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.661 [2024-07-25 19:18:04.957208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.661 [2024-07-25 19:18:04.957451] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.661 [2024-07-25 19:18:04.957474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.661 [2024-07-25 19:18:04.957490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.661 [2024-07-25 19:18:04.961068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.661 [2024-07-25 19:18:04.970369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.661 [2024-07-25 19:18:04.970818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.661 [2024-07-25 19:18:04.970848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.661 [2024-07-25 19:18:04.970865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.661 [2024-07-25 19:18:04.971122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.661 [2024-07-25 19:18:04.971365] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.661 [2024-07-25 19:18:04.971388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.661 [2024-07-25 19:18:04.971403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.661 [2024-07-25 19:18:04.974978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.661 [2024-07-25 19:18:04.984290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.661 [2024-07-25 19:18:04.984741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.661 [2024-07-25 19:18:04.984771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.661 [2024-07-25 19:18:04.984789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.661 [2024-07-25 19:18:04.985028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.661 [2024-07-25 19:18:04.985283] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.661 [2024-07-25 19:18:04.985307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.661 [2024-07-25 19:18:04.985322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.661 [2024-07-25 19:18:04.988897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.661 [2024-07-25 19:18:04.998198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.661 [2024-07-25 19:18:04.998629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.661 [2024-07-25 19:18:04.998660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.661 [2024-07-25 19:18:04.998678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.661 [2024-07-25 19:18:04.998917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.661 [2024-07-25 19:18:04.999173] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.661 [2024-07-25 19:18:04.999196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.661 [2024-07-25 19:18:04.999211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.661 [2024-07-25 19:18:05.002796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.661 [2024-07-25 19:18:05.012097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.661 [2024-07-25 19:18:05.012576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.661 [2024-07-25 19:18:05.012607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.661 [2024-07-25 19:18:05.012624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.661 [2024-07-25 19:18:05.012863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.661 [2024-07-25 19:18:05.013117] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.661 [2024-07-25 19:18:05.013141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.661 [2024-07-25 19:18:05.013162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.661 [2024-07-25 19:18:05.016740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.661 [2024-07-25 19:18:05.026036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.661 [2024-07-25 19:18:05.026492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.661 [2024-07-25 19:18:05.026523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.661 [2024-07-25 19:18:05.026541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.661 [2024-07-25 19:18:05.026779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.661 [2024-07-25 19:18:05.027022] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.661 [2024-07-25 19:18:05.027044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.661 [2024-07-25 19:18:05.027060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.661 [2024-07-25 19:18:05.030646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.661 [2024-07-25 19:18:05.039939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.661 [2024-07-25 19:18:05.040386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.661 [2024-07-25 19:18:05.040417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.661 [2024-07-25 19:18:05.040435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.661 [2024-07-25 19:18:05.040674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.661 [2024-07-25 19:18:05.040917] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.661 [2024-07-25 19:18:05.040939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.661 [2024-07-25 19:18:05.040954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.661 [2024-07-25 19:18:05.044541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.662 [2024-07-25 19:18:05.053844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.662 [2024-07-25 19:18:05.054287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.662 [2024-07-25 19:18:05.054317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.662 [2024-07-25 19:18:05.054335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.662 [2024-07-25 19:18:05.054573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.662 [2024-07-25 19:18:05.054815] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.662 [2024-07-25 19:18:05.054838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.662 [2024-07-25 19:18:05.054853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.662 [2024-07-25 19:18:05.058437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.662 [2024-07-25 19:18:05.067728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.662 [2024-07-25 19:18:05.068152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.662 [2024-07-25 19:18:05.068189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.662 [2024-07-25 19:18:05.068207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.662 [2024-07-25 19:18:05.068446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.662 [2024-07-25 19:18:05.068689] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.662 [2024-07-25 19:18:05.068711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.662 [2024-07-25 19:18:05.068726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.662 [2024-07-25 19:18:05.072316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.662 [2024-07-25 19:18:05.081611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.662 [2024-07-25 19:18:05.082059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.662 [2024-07-25 19:18:05.082089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.662 [2024-07-25 19:18:05.082118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.662 [2024-07-25 19:18:05.082359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.662 [2024-07-25 19:18:05.082601] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.662 [2024-07-25 19:18:05.082624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.662 [2024-07-25 19:18:05.082639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.662 [2024-07-25 19:18:05.086239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.662 [2024-07-25 19:18:05.095533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.662 [2024-07-25 19:18:05.095984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.662 [2024-07-25 19:18:05.096014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.662 [2024-07-25 19:18:05.096032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.662 [2024-07-25 19:18:05.096283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.662 [2024-07-25 19:18:05.096526] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.662 [2024-07-25 19:18:05.096548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.662 [2024-07-25 19:18:05.096564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.662 [2024-07-25 19:18:05.100149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.662 [2024-07-25 19:18:05.109444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.662 [2024-07-25 19:18:05.109895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.662 [2024-07-25 19:18:05.109925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.662 [2024-07-25 19:18:05.109943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.662 [2024-07-25 19:18:05.110195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.662 [2024-07-25 19:18:05.110444] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.662 [2024-07-25 19:18:05.110467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.662 [2024-07-25 19:18:05.110482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.662 [2024-07-25 19:18:05.114061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.662 [2024-07-25 19:18:05.123360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.662 [2024-07-25 19:18:05.123810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.662 [2024-07-25 19:18:05.123840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.662 [2024-07-25 19:18:05.123858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.662 [2024-07-25 19:18:05.124095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.662 [2024-07-25 19:18:05.124351] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.662 [2024-07-25 19:18:05.124373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.662 [2024-07-25 19:18:05.124388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.662 [2024-07-25 19:18:05.127969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.921 [2024-07-25 19:18:05.137274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.921 [2024-07-25 19:18:05.137733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.921 [2024-07-25 19:18:05.137764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.921 [2024-07-25 19:18:05.137781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.921 [2024-07-25 19:18:05.138019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.921 [2024-07-25 19:18:05.138274] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.921 [2024-07-25 19:18:05.138298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.921 [2024-07-25 19:18:05.138313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.921 [2024-07-25 19:18:05.141893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.921 [2024-07-25 19:18:05.151195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.921 [2024-07-25 19:18:05.151619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.921 [2024-07-25 19:18:05.151649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.921 [2024-07-25 19:18:05.151666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.921 [2024-07-25 19:18:05.151905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.921 [2024-07-25 19:18:05.152160] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.921 [2024-07-25 19:18:05.152184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.921 [2024-07-25 19:18:05.152199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.921 [2024-07-25 19:18:05.155782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.921 [2024-07-25 19:18:05.165078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.921 [2024-07-25 19:18:05.165541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.921 [2024-07-25 19:18:05.165572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.921 [2024-07-25 19:18:05.165589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.921 [2024-07-25 19:18:05.165828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.921 [2024-07-25 19:18:05.166070] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.921 [2024-07-25 19:18:05.166093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.921 [2024-07-25 19:18:05.166121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.921 [2024-07-25 19:18:05.169699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.921 [2024-07-25 19:18:05.178999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.921 [2024-07-25 19:18:05.179432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.921 [2024-07-25 19:18:05.179463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.921 [2024-07-25 19:18:05.179481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.921 [2024-07-25 19:18:05.179721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.921 [2024-07-25 19:18:05.179963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.921 [2024-07-25 19:18:05.179986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.921 [2024-07-25 19:18:05.180002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.921 [2024-07-25 19:18:05.183591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.921 [2024-07-25 19:18:05.192906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.921 [2024-07-25 19:18:05.193326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.921 [2024-07-25 19:18:05.193357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.921 [2024-07-25 19:18:05.193375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.921 [2024-07-25 19:18:05.193613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.921 [2024-07-25 19:18:05.193856] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.921 [2024-07-25 19:18:05.193879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.921 [2024-07-25 19:18:05.193894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.921 [2024-07-25 19:18:05.197481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.921 [2024-07-25 19:18:05.206914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.921 [2024-07-25 19:18:05.207376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.921 [2024-07-25 19:18:05.207407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.921 [2024-07-25 19:18:05.207430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.921 [2024-07-25 19:18:05.207670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.921 [2024-07-25 19:18:05.207912] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.921 [2024-07-25 19:18:05.207935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.921 [2024-07-25 19:18:05.207951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.921 [2024-07-25 19:18:05.211538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.921 [2024-07-25 19:18:05.220837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.921 [2024-07-25 19:18:05.221276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.921 [2024-07-25 19:18:05.221306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.921 [2024-07-25 19:18:05.221324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.921 [2024-07-25 19:18:05.221564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.921 [2024-07-25 19:18:05.221806] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.921 [2024-07-25 19:18:05.221829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.921 [2024-07-25 19:18:05.221844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.921 [2024-07-25 19:18:05.225437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.921 [2024-07-25 19:18:05.234733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.921 [2024-07-25 19:18:05.235199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.921 [2024-07-25 19:18:05.235230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.921 [2024-07-25 19:18:05.235248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.921 [2024-07-25 19:18:05.235487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.921 [2024-07-25 19:18:05.235729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.921 [2024-07-25 19:18:05.235751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.921 [2024-07-25 19:18:05.235766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.921 [2024-07-25 19:18:05.239354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.921 [2024-07-25 19:18:05.248656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.921 [2024-07-25 19:18:05.249215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.921 [2024-07-25 19:18:05.249248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.921 [2024-07-25 19:18:05.249265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.921 [2024-07-25 19:18:05.249505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.921 [2024-07-25 19:18:05.249748] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.921 [2024-07-25 19:18:05.249775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.921 [2024-07-25 19:18:05.249791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.921 [2024-07-25 19:18:05.253383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.921 [2024-07-25 19:18:05.262683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.921 [2024-07-25 19:18:05.263114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.921 [2024-07-25 19:18:05.263144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.921 [2024-07-25 19:18:05.263162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.921 [2024-07-25 19:18:05.263401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.921 [2024-07-25 19:18:05.263642] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.922 [2024-07-25 19:18:05.263666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.922 [2024-07-25 19:18:05.263680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.922 [2024-07-25 19:18:05.267265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.922 [2024-07-25 19:18:05.276592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.922 [2024-07-25 19:18:05.277045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.922 [2024-07-25 19:18:05.277076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.922 [2024-07-25 19:18:05.277093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.922 [2024-07-25 19:18:05.277343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.922 [2024-07-25 19:18:05.277585] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.922 [2024-07-25 19:18:05.277608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.922 [2024-07-25 19:18:05.277623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.922 [2024-07-25 19:18:05.281214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.922 [2024-07-25 19:18:05.290530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.922 [2024-07-25 19:18:05.290993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.922 [2024-07-25 19:18:05.291024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.922 [2024-07-25 19:18:05.291042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.922 [2024-07-25 19:18:05.291291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.922 [2024-07-25 19:18:05.291534] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.922 [2024-07-25 19:18:05.291557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.922 [2024-07-25 19:18:05.291572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.922 [2024-07-25 19:18:05.295152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.922 [2024-07-25 19:18:05.304445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.922 [2024-07-25 19:18:05.304895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.922 [2024-07-25 19:18:05.304925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.922 [2024-07-25 19:18:05.304943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.922 [2024-07-25 19:18:05.305191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.922 [2024-07-25 19:18:05.305435] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.922 [2024-07-25 19:18:05.305457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.922 [2024-07-25 19:18:05.305472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.922 [2024-07-25 19:18:05.309049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.922 [2024-07-25 19:18:05.318342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.922 [2024-07-25 19:18:05.318799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.922 [2024-07-25 19:18:05.318829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.922 [2024-07-25 19:18:05.318846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.922 [2024-07-25 19:18:05.319085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.922 [2024-07-25 19:18:05.319337] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.922 [2024-07-25 19:18:05.319361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.922 [2024-07-25 19:18:05.319376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.922 [2024-07-25 19:18:05.322949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.922 [2024-07-25 19:18:05.332250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.922 [2024-07-25 19:18:05.332679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.922 [2024-07-25 19:18:05.332709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.922 [2024-07-25 19:18:05.332727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.922 [2024-07-25 19:18:05.332965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.922 [2024-07-25 19:18:05.333227] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.922 [2024-07-25 19:18:05.333250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.922 [2024-07-25 19:18:05.333266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.922 [2024-07-25 19:18:05.336845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.922 [2024-07-25 19:18:05.346155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.922 [2024-07-25 19:18:05.346612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.922 [2024-07-25 19:18:05.346642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.922 [2024-07-25 19:18:05.346666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.922 [2024-07-25 19:18:05.346906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.922 [2024-07-25 19:18:05.347161] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.922 [2024-07-25 19:18:05.347184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.922 [2024-07-25 19:18:05.347199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.922 [2024-07-25 19:18:05.350774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.922 [2024-07-25 19:18:05.360072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.922 [2024-07-25 19:18:05.360537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.922 [2024-07-25 19:18:05.360568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:12.922 [2024-07-25 19:18:05.360585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:12.922 [2024-07-25 19:18:05.360825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:12.922 [2024-07-25 19:18:05.361068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:12.922 [2024-07-25 19:18:05.361090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:12.922 [2024-07-25 19:18:05.361117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.922 [2024-07-25 19:18:05.364695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:12.922 [2024-07-25 19:18:05.374028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.922 [2024-07-25 19:18:05.374467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.922 [2024-07-25 19:18:05.374498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.922 [2024-07-25 19:18:05.374516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.922 [2024-07-25 19:18:05.374754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.922 [2024-07-25 19:18:05.374996] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.922 [2024-07-25 19:18:05.375019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.922 [2024-07-25 19:18:05.375034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.922 [2024-07-25 19:18:05.378626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:12.922 [2024-07-25 19:18:05.387944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.922 [2024-07-25 19:18:05.388403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.922 [2024-07-25 19:18:05.388434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:12.922 [2024-07-25 19:18:05.388451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:12.922 [2024-07-25 19:18:05.388689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:12.922 [2024-07-25 19:18:05.388932] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.922 [2024-07-25 19:18:05.388960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.922 [2024-07-25 19:18:05.388976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.181 [2024-07-25 19:18:05.392608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.181 [2024-07-25 19:18:05.401918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.181 [2024-07-25 19:18:05.402354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.181 [2024-07-25 19:18:05.402385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.181 [2024-07-25 19:18:05.402403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.181 [2024-07-25 19:18:05.402641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.181 [2024-07-25 19:18:05.402884] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.181 [2024-07-25 19:18:05.402907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.181 [2024-07-25 19:18:05.402922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.181 [2024-07-25 19:18:05.406514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.181 [2024-07-25 19:18:05.415817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.181 [2024-07-25 19:18:05.416269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.181 [2024-07-25 19:18:05.416301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.181 [2024-07-25 19:18:05.416319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.181 [2024-07-25 19:18:05.416558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.181 [2024-07-25 19:18:05.416800] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.181 [2024-07-25 19:18:05.416823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.181 [2024-07-25 19:18:05.416838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.181 [2024-07-25 19:18:05.420433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.181 [2024-07-25 19:18:05.429733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.181 [2024-07-25 19:18:05.430162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.181 [2024-07-25 19:18:05.430194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.181 [2024-07-25 19:18:05.430211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.181 [2024-07-25 19:18:05.430450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.181 [2024-07-25 19:18:05.430692] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.181 [2024-07-25 19:18:05.430715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.181 [2024-07-25 19:18:05.430730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.181 [2024-07-25 19:18:05.434319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.181 [2024-07-25 19:18:05.443619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.181 [2024-07-25 19:18:05.444047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.181 [2024-07-25 19:18:05.444078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.181 [2024-07-25 19:18:05.444096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.181 [2024-07-25 19:18:05.444343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.181 [2024-07-25 19:18:05.444586] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.181 [2024-07-25 19:18:05.444609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.181 [2024-07-25 19:18:05.444624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.181 [2024-07-25 19:18:05.448210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.181 [2024-07-25 19:18:05.457508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.181 [2024-07-25 19:18:05.458059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.181 [2024-07-25 19:18:05.458118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.181 [2024-07-25 19:18:05.458138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.181 [2024-07-25 19:18:05.458377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.181 [2024-07-25 19:18:05.458620] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.181 [2024-07-25 19:18:05.458643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.181 [2024-07-25 19:18:05.458657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.181 [2024-07-25 19:18:05.462243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.181 [2024-07-25 19:18:05.471548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.181 [2024-07-25 19:18:05.471996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.181 [2024-07-25 19:18:05.472027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.182 [2024-07-25 19:18:05.472044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.182 [2024-07-25 19:18:05.472295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.182 [2024-07-25 19:18:05.472538] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.182 [2024-07-25 19:18:05.472561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.182 [2024-07-25 19:18:05.472576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.182 [2024-07-25 19:18:05.476161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.182 [2024-07-25 19:18:05.485479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.182 [2024-07-25 19:18:05.485968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.182 [2024-07-25 19:18:05.486016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.182 [2024-07-25 19:18:05.486034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.182 [2024-07-25 19:18:05.486290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.182 [2024-07-25 19:18:05.486533] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.182 [2024-07-25 19:18:05.486557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.182 [2024-07-25 19:18:05.486571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.182 [2024-07-25 19:18:05.490184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.182 [2024-07-25 19:18:05.499497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.182 [2024-07-25 19:18:05.499976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.182 [2024-07-25 19:18:05.500024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.182 [2024-07-25 19:18:05.500041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.182 [2024-07-25 19:18:05.500291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.182 [2024-07-25 19:18:05.500534] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.182 [2024-07-25 19:18:05.500557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.182 [2024-07-25 19:18:05.500572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.182 [2024-07-25 19:18:05.504171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.182 [2024-07-25 19:18:05.513488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.182 [2024-07-25 19:18:05.513919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.182 [2024-07-25 19:18:05.513950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.182 [2024-07-25 19:18:05.513967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.182 [2024-07-25 19:18:05.514217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.182 [2024-07-25 19:18:05.514460] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.182 [2024-07-25 19:18:05.514483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.182 [2024-07-25 19:18:05.514499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.182 [2024-07-25 19:18:05.518074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.182 [2024-07-25 19:18:05.527364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.182 [2024-07-25 19:18:05.527849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.182 [2024-07-25 19:18:05.527909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.182 [2024-07-25 19:18:05.527926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.182 [2024-07-25 19:18:05.528176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.182 [2024-07-25 19:18:05.528419] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.182 [2024-07-25 19:18:05.528442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.182 [2024-07-25 19:18:05.528466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.182 [2024-07-25 19:18:05.532046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.182 [2024-07-25 19:18:05.541343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.182 [2024-07-25 19:18:05.541855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.182 [2024-07-25 19:18:05.541904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.182 [2024-07-25 19:18:05.541923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.182 [2024-07-25 19:18:05.542171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.182 [2024-07-25 19:18:05.542414] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.182 [2024-07-25 19:18:05.542436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.182 [2024-07-25 19:18:05.542452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.182 [2024-07-25 19:18:05.546035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.182 [2024-07-25 19:18:05.555337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.182 [2024-07-25 19:18:05.555842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.182 [2024-07-25 19:18:05.555890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.182 [2024-07-25 19:18:05.555907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.182 [2024-07-25 19:18:05.556159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.182 [2024-07-25 19:18:05.556403] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.182 [2024-07-25 19:18:05.556426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.182 [2024-07-25 19:18:05.556441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.182 [2024-07-25 19:18:05.560015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.182 [2024-07-25 19:18:05.569315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.182 [2024-07-25 19:18:05.569806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.182 [2024-07-25 19:18:05.569837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.182 [2024-07-25 19:18:05.569855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.182 [2024-07-25 19:18:05.570093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.182 [2024-07-25 19:18:05.570346] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.182 [2024-07-25 19:18:05.570369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.182 [2024-07-25 19:18:05.570384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.182 [2024-07-25 19:18:05.573957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.182 [2024-07-25 19:18:05.583258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.182 [2024-07-25 19:18:05.583749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.182 [2024-07-25 19:18:05.583784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.182 [2024-07-25 19:18:05.583802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.182 [2024-07-25 19:18:05.584040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.182 [2024-07-25 19:18:05.584292] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.182 [2024-07-25 19:18:05.584315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.182 [2024-07-25 19:18:05.584330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.182 [2024-07-25 19:18:05.587919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.182 [2024-07-25 19:18:05.597223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.182 [2024-07-25 19:18:05.597648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.182 [2024-07-25 19:18:05.597679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.182 [2024-07-25 19:18:05.597697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.182 [2024-07-25 19:18:05.597936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.182 [2024-07-25 19:18:05.598188] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.182 [2024-07-25 19:18:05.598212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.182 [2024-07-25 19:18:05.598227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.182 [2024-07-25 19:18:05.601797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.182 [2024-07-25 19:18:05.611097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.182 [2024-07-25 19:18:05.611537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.182 [2024-07-25 19:18:05.611567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.183 [2024-07-25 19:18:05.611584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.183 [2024-07-25 19:18:05.611823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.183 [2024-07-25 19:18:05.612066] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.183 [2024-07-25 19:18:05.612088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.183 [2024-07-25 19:18:05.612112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.183 [2024-07-25 19:18:05.615688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.183 [2024-07-25 19:18:05.625003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.183 [2024-07-25 19:18:05.625444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.183 [2024-07-25 19:18:05.625475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.183 [2024-07-25 19:18:05.625492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.183 [2024-07-25 19:18:05.625730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.183 [2024-07-25 19:18:05.625979] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.183 [2024-07-25 19:18:05.626002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.183 [2024-07-25 19:18:05.626017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.183 [2024-07-25 19:18:05.629601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.183 [2024-07-25 19:18:05.638898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.183 [2024-07-25 19:18:05.639336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.183 [2024-07-25 19:18:05.639367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.183 [2024-07-25 19:18:05.639384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.183 [2024-07-25 19:18:05.639623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.183 [2024-07-25 19:18:05.639865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.183 [2024-07-25 19:18:05.639888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.183 [2024-07-25 19:18:05.639904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.183 [2024-07-25 19:18:05.643493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.441 [2024-07-25 19:18:05.652824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.442 [2024-07-25 19:18:05.653309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.442 [2024-07-25 19:18:05.653340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.442 [2024-07-25 19:18:05.653358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.442 [2024-07-25 19:18:05.653597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.442 [2024-07-25 19:18:05.653839] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.442 [2024-07-25 19:18:05.653861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.442 [2024-07-25 19:18:05.653876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.442 [2024-07-25 19:18:05.657456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.442 [2024-07-25 19:18:05.666772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.442 [2024-07-25 19:18:05.667229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.442 [2024-07-25 19:18:05.667260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.442 [2024-07-25 19:18:05.667278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.442 [2024-07-25 19:18:05.667517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.442 [2024-07-25 19:18:05.667759] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.442 [2024-07-25 19:18:05.667781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.442 [2024-07-25 19:18:05.667796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.442 [2024-07-25 19:18:05.671385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.442 [2024-07-25 19:18:05.680686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.442 [2024-07-25 19:18:05.681146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.442 [2024-07-25 19:18:05.681177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.442 [2024-07-25 19:18:05.681195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.442 [2024-07-25 19:18:05.681434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.442 [2024-07-25 19:18:05.681676] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.442 [2024-07-25 19:18:05.681699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.442 [2024-07-25 19:18:05.681714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.442 [2024-07-25 19:18:05.685303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.442 [2024-07-25 19:18:05.694612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.442 [2024-07-25 19:18:05.695085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.442 [2024-07-25 19:18:05.695124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.442 [2024-07-25 19:18:05.695143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.442 [2024-07-25 19:18:05.695382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.442 [2024-07-25 19:18:05.695624] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.442 [2024-07-25 19:18:05.695647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.442 [2024-07-25 19:18:05.695662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.442 [2024-07-25 19:18:05.699241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.442 [2024-07-25 19:18:05.708548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.442 [2024-07-25 19:18:05.708974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.442 [2024-07-25 19:18:05.709005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.442 [2024-07-25 19:18:05.709022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.442 [2024-07-25 19:18:05.709269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.442 [2024-07-25 19:18:05.709512] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.442 [2024-07-25 19:18:05.709535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.442 [2024-07-25 19:18:05.709551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.442 [2024-07-25 19:18:05.713212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.442 [2024-07-25 19:18:05.722504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.442 [2024-07-25 19:18:05.722958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.442 [2024-07-25 19:18:05.722989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.442 [2024-07-25 19:18:05.723013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.442 [2024-07-25 19:18:05.723264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.442 [2024-07-25 19:18:05.723513] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.442 [2024-07-25 19:18:05.723535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.442 [2024-07-25 19:18:05.723550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.442 [2024-07-25 19:18:05.727139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.442 [2024-07-25 19:18:05.736453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.442 [2024-07-25 19:18:05.736923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.442 [2024-07-25 19:18:05.736953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.442 [2024-07-25 19:18:05.736971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.442 [2024-07-25 19:18:05.737229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.442 [2024-07-25 19:18:05.737472] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.442 [2024-07-25 19:18:05.737495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.442 [2024-07-25 19:18:05.737509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.442 [2024-07-25 19:18:05.741082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.442 [2024-07-25 19:18:05.750389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.442 [2024-07-25 19:18:05.750874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.442 [2024-07-25 19:18:05.750905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.442 [2024-07-25 19:18:05.750922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.442 [2024-07-25 19:18:05.751172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.442 [2024-07-25 19:18:05.751415] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.442 [2024-07-25 19:18:05.751438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.442 [2024-07-25 19:18:05.751454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.442 [2024-07-25 19:18:05.755027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.442 [2024-07-25 19:18:05.764341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.442 [2024-07-25 19:18:05.764824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.442 [2024-07-25 19:18:05.764854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.442 [2024-07-25 19:18:05.764872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.442 [2024-07-25 19:18:05.765120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.442 [2024-07-25 19:18:05.765367] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.442 [2024-07-25 19:18:05.765395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.442 [2024-07-25 19:18:05.765411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.442 [2024-07-25 19:18:05.768986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.442 [2024-07-25 19:18:05.778318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.442 [2024-07-25 19:18:05.778761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.442 [2024-07-25 19:18:05.778791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.442 [2024-07-25 19:18:05.778809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.442 [2024-07-25 19:18:05.779048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.442 [2024-07-25 19:18:05.779301] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.442 [2024-07-25 19:18:05.779324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.442 [2024-07-25 19:18:05.779339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.442 [2024-07-25 19:18:05.782914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.442 [2024-07-25 19:18:05.792218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.443 [2024-07-25 19:18:05.792646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.443 [2024-07-25 19:18:05.792676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.443 [2024-07-25 19:18:05.792694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.443 [2024-07-25 19:18:05.792940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.443 [2024-07-25 19:18:05.793204] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.443 [2024-07-25 19:18:05.793228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.443 [2024-07-25 19:18:05.793243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.443 [2024-07-25 19:18:05.796815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.443 [2024-07-25 19:18:05.806121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.443 [2024-07-25 19:18:05.806549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.443 [2024-07-25 19:18:05.806580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.443 [2024-07-25 19:18:05.806597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.443 [2024-07-25 19:18:05.806836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.443 [2024-07-25 19:18:05.807086] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.443 [2024-07-25 19:18:05.807119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.443 [2024-07-25 19:18:05.807137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.443 [2024-07-25 19:18:05.810710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.443 [2024-07-25 19:18:05.820029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.443 [2024-07-25 19:18:05.820466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.443 [2024-07-25 19:18:05.820498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.443 [2024-07-25 19:18:05.820516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.443 [2024-07-25 19:18:05.820755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.443 [2024-07-25 19:18:05.820997] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.443 [2024-07-25 19:18:05.821020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.443 [2024-07-25 19:18:05.821035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.443 [2024-07-25 19:18:05.824626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.443 [2024-07-25 19:18:05.833931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.443 [2024-07-25 19:18:05.834346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.443 [2024-07-25 19:18:05.834387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.443 [2024-07-25 19:18:05.834405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.443 [2024-07-25 19:18:05.834643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.443 [2024-07-25 19:18:05.834885] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.443 [2024-07-25 19:18:05.834908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.443 [2024-07-25 19:18:05.834922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.443 [2024-07-25 19:18:05.838512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.443 [2024-07-25 19:18:05.847805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.443 [2024-07-25 19:18:05.848266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.443 [2024-07-25 19:18:05.848297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.443 [2024-07-25 19:18:05.848315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.443 [2024-07-25 19:18:05.848554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.443 [2024-07-25 19:18:05.848796] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.443 [2024-07-25 19:18:05.848819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.443 [2024-07-25 19:18:05.848833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.443 [2024-07-25 19:18:05.852413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.443 [2024-07-25 19:18:05.861715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.443 [2024-07-25 19:18:05.862191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.443 [2024-07-25 19:18:05.862224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.443 [2024-07-25 19:18:05.862242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.443 [2024-07-25 19:18:05.862486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.443 [2024-07-25 19:18:05.862729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.443 [2024-07-25 19:18:05.862752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.443 [2024-07-25 19:18:05.862766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.443 [2024-07-25 19:18:05.866356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.443 [2024-07-25 19:18:05.875661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.443 [2024-07-25 19:18:05.876115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.443 [2024-07-25 19:18:05.876152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.443 [2024-07-25 19:18:05.876169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.443 [2024-07-25 19:18:05.876408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.443 [2024-07-25 19:18:05.876651] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.443 [2024-07-25 19:18:05.876674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.443 [2024-07-25 19:18:05.876689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.443 [2024-07-25 19:18:05.880276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.443 [2024-07-25 19:18:05.889574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.443 [2024-07-25 19:18:05.890057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.443 [2024-07-25 19:18:05.890087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.443 [2024-07-25 19:18:05.890114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.443 [2024-07-25 19:18:05.890356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.443 [2024-07-25 19:18:05.890598] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.443 [2024-07-25 19:18:05.890620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.443 [2024-07-25 19:18:05.890636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.443 [2024-07-25 19:18:05.894215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.443 [2024-07-25 19:18:05.903507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.443 [2024-07-25 19:18:05.903962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.443 [2024-07-25 19:18:05.903992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.443 [2024-07-25 19:18:05.904016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.443 [2024-07-25 19:18:05.904266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.443 [2024-07-25 19:18:05.904509] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.443 [2024-07-25 19:18:05.904531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.443 [2024-07-25 19:18:05.904552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.443 [2024-07-25 19:18:05.908139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.702 [2024-07-25 19:18:05.917457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.702 [2024-07-25 19:18:05.917912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.702 [2024-07-25 19:18:05.917942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.702 [2024-07-25 19:18:05.917960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.702 [2024-07-25 19:18:05.918211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.702 [2024-07-25 19:18:05.918454] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.702 [2024-07-25 19:18:05.918477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.702 [2024-07-25 19:18:05.918492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.702 [2024-07-25 19:18:05.922063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.702 [2024-07-25 19:18:05.931353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.702 [2024-07-25 19:18:05.931786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.702 [2024-07-25 19:18:05.931816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.702 [2024-07-25 19:18:05.931834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.702 [2024-07-25 19:18:05.932073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.702 [2024-07-25 19:18:05.932324] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.702 [2024-07-25 19:18:05.932348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.702 [2024-07-25 19:18:05.932363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.702 [2024-07-25 19:18:05.935938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.702 [2024-07-25 19:18:05.945236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.702 [2024-07-25 19:18:05.945700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.702 [2024-07-25 19:18:05.945730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.702 [2024-07-25 19:18:05.945747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.702 [2024-07-25 19:18:05.945986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.702 [2024-07-25 19:18:05.946237] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.702 [2024-07-25 19:18:05.946261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.702 [2024-07-25 19:18:05.946276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.702 [2024-07-25 19:18:05.949858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.702 [2024-07-25 19:18:05.959149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.702 [2024-07-25 19:18:05.959609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.702 [2024-07-25 19:18:05.959639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.702 [2024-07-25 19:18:05.959656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.702 [2024-07-25 19:18:05.959894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.702 [2024-07-25 19:18:05.960145] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.702 [2024-07-25 19:18:05.960169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.702 [2024-07-25 19:18:05.960184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.702 [2024-07-25 19:18:05.963776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.702 [2024-07-25 19:18:05.973082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.702 [2024-07-25 19:18:05.973547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.702 [2024-07-25 19:18:05.973578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.702 [2024-07-25 19:18:05.973595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.702 [2024-07-25 19:18:05.973834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.702 [2024-07-25 19:18:05.974077] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.702 [2024-07-25 19:18:05.974099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.702 [2024-07-25 19:18:05.974128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.702 [2024-07-25 19:18:05.977702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.702 [2024-07-25 19:18:05.987027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.702 [2024-07-25 19:18:05.987499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.702 [2024-07-25 19:18:05.987530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.702 [2024-07-25 19:18:05.987547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.702 [2024-07-25 19:18:05.987786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.702 [2024-07-25 19:18:05.988029] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.702 [2024-07-25 19:18:05.988051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.702 [2024-07-25 19:18:05.988066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.702 [2024-07-25 19:18:05.991650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.702 [2024-07-25 19:18:06.000932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.702 [2024-07-25 19:18:06.001369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.702 [2024-07-25 19:18:06.001399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.702 [2024-07-25 19:18:06.001417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.703 [2024-07-25 19:18:06.001665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.703 [2024-07-25 19:18:06.001907] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.703 [2024-07-25 19:18:06.001930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.703 [2024-07-25 19:18:06.001946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.703 [2024-07-25 19:18:06.005527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.703 [2024-07-25 19:18:06.014858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.703 [2024-07-25 19:18:06.015308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.703 [2024-07-25 19:18:06.015339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.703 [2024-07-25 19:18:06.015356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.703 [2024-07-25 19:18:06.015594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.703 [2024-07-25 19:18:06.015837] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.703 [2024-07-25 19:18:06.015860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.703 [2024-07-25 19:18:06.015875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.703 [2024-07-25 19:18:06.019455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.703 [2024-07-25 19:18:06.028768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.703 [2024-07-25 19:18:06.029199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.703 [2024-07-25 19:18:06.029237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.703 [2024-07-25 19:18:06.029255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.703 [2024-07-25 19:18:06.029494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.703 [2024-07-25 19:18:06.029737] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.703 [2024-07-25 19:18:06.029760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.703 [2024-07-25 19:18:06.029775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.703 [2024-07-25 19:18:06.033365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.703 [2024-07-25 19:18:06.042671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.703 [2024-07-25 19:18:06.043116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.703 [2024-07-25 19:18:06.043151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.703 [2024-07-25 19:18:06.043169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.703 [2024-07-25 19:18:06.043407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.703 [2024-07-25 19:18:06.043649] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.703 [2024-07-25 19:18:06.043672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.703 [2024-07-25 19:18:06.043693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.703 [2024-07-25 19:18:06.047288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.703 [2024-07-25 19:18:06.056596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.703 [2024-07-25 19:18:06.057051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.703 [2024-07-25 19:18:06.057081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:13.703 [2024-07-25 19:18:06.057099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:13.703 [2024-07-25 19:18:06.057349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:13.703 [2024-07-25 19:18:06.057591] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.703 [2024-07-25 19:18:06.057614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.703 [2024-07-25 19:18:06.057629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.703 [2024-07-25 19:18:06.061210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.703 [2024-07-25 19:18:06.070500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.703 [2024-07-25 19:18:06.070954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.703 [2024-07-25 19:18:06.070984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.703 [2024-07-25 19:18:06.071001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.703 [2024-07-25 19:18:06.071250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.703 [2024-07-25 19:18:06.071503] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.703 [2024-07-25 19:18:06.071526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.703 [2024-07-25 19:18:06.071541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.703 [2024-07-25 19:18:06.075121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.703 [2024-07-25 19:18:06.084427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.703 [2024-07-25 19:18:06.084876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.703 [2024-07-25 19:18:06.084907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.703 [2024-07-25 19:18:06.084924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.703 [2024-07-25 19:18:06.085175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.703 [2024-07-25 19:18:06.085417] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.703 [2024-07-25 19:18:06.085440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.703 [2024-07-25 19:18:06.085455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.703 [2024-07-25 19:18:06.089044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.703 [2024-07-25 19:18:06.098341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.703 [2024-07-25 19:18:06.098777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.703 [2024-07-25 19:18:06.098813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.703 [2024-07-25 19:18:06.098831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.703 [2024-07-25 19:18:06.099083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.703 [2024-07-25 19:18:06.099336] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.703 [2024-07-25 19:18:06.099360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.703 [2024-07-25 19:18:06.099375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.703 [2024-07-25 19:18:06.102952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.703 [2024-07-25 19:18:06.112276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.703 [2024-07-25 19:18:06.112736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.703 [2024-07-25 19:18:06.112767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.703 [2024-07-25 19:18:06.112785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.703 [2024-07-25 19:18:06.113023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.703 [2024-07-25 19:18:06.113276] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.703 [2024-07-25 19:18:06.113299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.703 [2024-07-25 19:18:06.113314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.703 [2024-07-25 19:18:06.116888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.703 [2024-07-25 19:18:06.126205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.703 [2024-07-25 19:18:06.126649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.704 [2024-07-25 19:18:06.126679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.704 [2024-07-25 19:18:06.126696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.704 [2024-07-25 19:18:06.126935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.704 [2024-07-25 19:18:06.127188] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.704 [2024-07-25 19:18:06.127212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.704 [2024-07-25 19:18:06.127227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.704 [2024-07-25 19:18:06.130799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.704 [2024-07-25 19:18:06.140086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.704 [2024-07-25 19:18:06.140549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.704 [2024-07-25 19:18:06.140578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.704 [2024-07-25 19:18:06.140596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.704 [2024-07-25 19:18:06.140834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.704 [2024-07-25 19:18:06.141083] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.704 [2024-07-25 19:18:06.141114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.704 [2024-07-25 19:18:06.141132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.704 [2024-07-25 19:18:06.144710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.704 [2024-07-25 19:18:06.154001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.704 [2024-07-25 19:18:06.154447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.704 [2024-07-25 19:18:06.154477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.704 [2024-07-25 19:18:06.154495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.704 [2024-07-25 19:18:06.154733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.704 [2024-07-25 19:18:06.154975] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.704 [2024-07-25 19:18:06.154998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.704 [2024-07-25 19:18:06.155013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.704 [2024-07-25 19:18:06.158596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.704 [2024-07-25 19:18:06.167881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.704 [2024-07-25 19:18:06.168315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.704 [2024-07-25 19:18:06.168345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.704 [2024-07-25 19:18:06.168362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.704 [2024-07-25 19:18:06.168600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.704 [2024-07-25 19:18:06.168842] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.704 [2024-07-25 19:18:06.168865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.704 [2024-07-25 19:18:06.168880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.962 [2024-07-25 19:18:06.172458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.962 [2024-07-25 19:18:06.181747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.962 [2024-07-25 19:18:06.182186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.962 [2024-07-25 19:18:06.182217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.962 [2024-07-25 19:18:06.182234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.962 [2024-07-25 19:18:06.182473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.962 [2024-07-25 19:18:06.182716] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.962 [2024-07-25 19:18:06.182739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.962 [2024-07-25 19:18:06.182754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.962 [2024-07-25 19:18:06.186342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.962 [2024-07-25 19:18:06.195658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.962 [2024-07-25 19:18:06.196118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-25 19:18:06.196149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.963 [2024-07-25 19:18:06.196167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.963 [2024-07-25 19:18:06.196405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.963 [2024-07-25 19:18:06.196648] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.963 [2024-07-25 19:18:06.196671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.963 [2024-07-25 19:18:06.196686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.963 [2024-07-25 19:18:06.200266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.963 [2024-07-25 19:18:06.209570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.963 [2024-07-25 19:18:06.210021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-25 19:18:06.210051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.963 [2024-07-25 19:18:06.210068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.963 [2024-07-25 19:18:06.210315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.963 [2024-07-25 19:18:06.210558] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.963 [2024-07-25 19:18:06.210580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.963 [2024-07-25 19:18:06.210595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.963 [2024-07-25 19:18:06.214173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.963 [2024-07-25 19:18:06.223471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.963 [2024-07-25 19:18:06.223899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-25 19:18:06.223930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.963 [2024-07-25 19:18:06.223947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.963 [2024-07-25 19:18:06.224196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.963 [2024-07-25 19:18:06.224439] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.963 [2024-07-25 19:18:06.224461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.963 [2024-07-25 19:18:06.224477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.963 [2024-07-25 19:18:06.228083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.963 [2024-07-25 19:18:06.237386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.963 [2024-07-25 19:18:06.237847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-25 19:18:06.237878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.963 [2024-07-25 19:18:06.237905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.963 [2024-07-25 19:18:06.238158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.963 [2024-07-25 19:18:06.238401] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.963 [2024-07-25 19:18:06.238424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.963 [2024-07-25 19:18:06.238439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.963 [2024-07-25 19:18:06.242012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.963 [2024-07-25 19:18:06.251311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.963 [2024-07-25 19:18:06.251735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-25 19:18:06.251765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.963 [2024-07-25 19:18:06.251782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.963 [2024-07-25 19:18:06.252021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.963 [2024-07-25 19:18:06.252271] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.963 [2024-07-25 19:18:06.252295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.963 [2024-07-25 19:18:06.252311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.963 [2024-07-25 19:18:06.255888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.963 [2024-07-25 19:18:06.265188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.963 [2024-07-25 19:18:06.265646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-25 19:18:06.265676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.963 [2024-07-25 19:18:06.265694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.963 [2024-07-25 19:18:06.265933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.963 [2024-07-25 19:18:06.266188] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.963 [2024-07-25 19:18:06.266211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.963 [2024-07-25 19:18:06.266227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.963 [2024-07-25 19:18:06.269798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.963 [2024-07-25 19:18:06.279145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.963 [2024-07-25 19:18:06.279553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-25 19:18:06.279584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.963 [2024-07-25 19:18:06.279601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.963 [2024-07-25 19:18:06.279841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.963 [2024-07-25 19:18:06.280083] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.963 [2024-07-25 19:18:06.280123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.963 [2024-07-25 19:18:06.280140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.963 [2024-07-25 19:18:06.283718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.963 [2024-07-25 19:18:06.293072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.963 [2024-07-25 19:18:06.293554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-25 19:18:06.293593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.963 [2024-07-25 19:18:06.293611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.963 [2024-07-25 19:18:06.293850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.963 [2024-07-25 19:18:06.294093] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.963 [2024-07-25 19:18:06.294126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.963 [2024-07-25 19:18:06.294142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.963 [2024-07-25 19:18:06.297718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.963 [2024-07-25 19:18:06.307012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.963 [2024-07-25 19:18:06.307477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-25 19:18:06.307508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.963 [2024-07-25 19:18:06.307525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.963 [2024-07-25 19:18:06.307764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.963 [2024-07-25 19:18:06.308006] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.963 [2024-07-25 19:18:06.308029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.963 [2024-07-25 19:18:06.308045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.963 [2024-07-25 19:18:06.311639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.963 [2024-07-25 19:18:06.320924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.963 [2024-07-25 19:18:06.321345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-25 19:18:06.321375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.963 [2024-07-25 19:18:06.321392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.963 [2024-07-25 19:18:06.321630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.963 [2024-07-25 19:18:06.321873] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.963 [2024-07-25 19:18:06.321895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.963 [2024-07-25 19:18:06.321911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.963 [2024-07-25 19:18:06.325501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.963 [2024-07-25 19:18:06.334795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.964 [2024-07-25 19:18:06.335264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-25 19:18:06.335295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-07-25 19:18:06.335313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.964 [2024-07-25 19:18:06.335552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.964 [2024-07-25 19:18:06.335794] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.964 [2024-07-25 19:18:06.335817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.964 [2024-07-25 19:18:06.335832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.964 [2024-07-25 19:18:06.339413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.964 [2024-07-25 19:18:06.348710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.964 [2024-07-25 19:18:06.349184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-25 19:18:06.349223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-07-25 19:18:06.349241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.964 [2024-07-25 19:18:06.349479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.964 [2024-07-25 19:18:06.349723] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.964 [2024-07-25 19:18:06.349745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.964 [2024-07-25 19:18:06.349760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.964 [2024-07-25 19:18:06.353344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.964 [2024-07-25 19:18:06.362652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.964 [2024-07-25 19:18:06.363100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-25 19:18:06.363149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-07-25 19:18:06.363166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.964 [2024-07-25 19:18:06.363405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.964 [2024-07-25 19:18:06.363655] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.964 [2024-07-25 19:18:06.363678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.964 [2024-07-25 19:18:06.363693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.964 [2024-07-25 19:18:06.367278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.964 [2024-07-25 19:18:06.376615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.964 [2024-07-25 19:18:06.377076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-25 19:18:06.377114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-07-25 19:18:06.377133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.964 [2024-07-25 19:18:06.377378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.964 [2024-07-25 19:18:06.377621] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.964 [2024-07-25 19:18:06.377643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.964 [2024-07-25 19:18:06.377659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.964 [2024-07-25 19:18:06.381244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.964 [2024-07-25 19:18:06.390544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.964 [2024-07-25 19:18:06.390977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-25 19:18:06.391008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-07-25 19:18:06.391026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.964 [2024-07-25 19:18:06.391275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.964 [2024-07-25 19:18:06.391518] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.964 [2024-07-25 19:18:06.391541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.964 [2024-07-25 19:18:06.391557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.964 [2024-07-25 19:18:06.395140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.964 [2024-07-25 19:18:06.404430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.964 [2024-07-25 19:18:06.404830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-25 19:18:06.404860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-07-25 19:18:06.404878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.964 [2024-07-25 19:18:06.405127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.964 [2024-07-25 19:18:06.405371] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.964 [2024-07-25 19:18:06.405394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.964 [2024-07-25 19:18:06.405409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.964 [2024-07-25 19:18:06.408986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.964 [2024-07-25 19:18:06.418282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.964 [2024-07-25 19:18:06.418732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-25 19:18:06.418763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-07-25 19:18:06.418781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:13.964 [2024-07-25 19:18:06.419019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:13.964 [2024-07-25 19:18:06.419271] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.964 [2024-07-25 19:18:06.419294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.964 [2024-07-25 19:18:06.419315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.964 [2024-07-25 19:18:06.422887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.964 [2024-07-25 19:18:06.432201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.223 [2024-07-25 19:18:06.432659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.223 [2024-07-25 19:18:06.432690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.223 [2024-07-25 19:18:06.432707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.223 [2024-07-25 19:18:06.432946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.223 [2024-07-25 19:18:06.433197] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.224 [2024-07-25 19:18:06.433221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.224 [2024-07-25 19:18:06.433236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.224 [2024-07-25 19:18:06.436808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.224 [2024-07-25 19:18:06.446098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.224 [2024-07-25 19:18:06.446558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.224 [2024-07-25 19:18:06.446588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.224 [2024-07-25 19:18:06.446605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.224 [2024-07-25 19:18:06.446844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.224 [2024-07-25 19:18:06.447086] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.224 [2024-07-25 19:18:06.447118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.224 [2024-07-25 19:18:06.447135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.224 [2024-07-25 19:18:06.450709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.224 [2024-07-25 19:18:06.460002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.224 [2024-07-25 19:18:06.460466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.224 [2024-07-25 19:18:06.460497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.224 [2024-07-25 19:18:06.460514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.224 [2024-07-25 19:18:06.460753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.224 [2024-07-25 19:18:06.460995] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.224 [2024-07-25 19:18:06.461018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.224 [2024-07-25 19:18:06.461033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.224 [2024-07-25 19:18:06.464625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.224 [2024-07-25 19:18:06.473948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.224 [2024-07-25 19:18:06.474394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.224 [2024-07-25 19:18:06.474425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.224 [2024-07-25 19:18:06.474443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.224 [2024-07-25 19:18:06.474682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.224 [2024-07-25 19:18:06.474924] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.224 [2024-07-25 19:18:06.474947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.224 [2024-07-25 19:18:06.474962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.224 [2024-07-25 19:18:06.478553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.224 [2024-07-25 19:18:06.487860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.224 [2024-07-25 19:18:06.488335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.224 [2024-07-25 19:18:06.488366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.224 [2024-07-25 19:18:06.488383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.224 [2024-07-25 19:18:06.488622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.224 [2024-07-25 19:18:06.488875] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.224 [2024-07-25 19:18:06.488900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.224 [2024-07-25 19:18:06.488915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.224 [2024-07-25 19:18:06.492493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.224 [2024-07-25 19:18:06.501772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.224 [2024-07-25 19:18:06.502210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.224 [2024-07-25 19:18:06.502242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.224 [2024-07-25 19:18:06.502260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.224 [2024-07-25 19:18:06.502499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.224 [2024-07-25 19:18:06.502741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.224 [2024-07-25 19:18:06.502765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.224 [2024-07-25 19:18:06.502780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.224 [2024-07-25 19:18:06.506370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.224 [2024-07-25 19:18:06.515666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.224 [2024-07-25 19:18:06.516096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.224 [2024-07-25 19:18:06.516134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.224 [2024-07-25 19:18:06.516152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.224 [2024-07-25 19:18:06.516397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.224 [2024-07-25 19:18:06.516640] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.224 [2024-07-25 19:18:06.516663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.224 [2024-07-25 19:18:06.516678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.224 [2024-07-25 19:18:06.520263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.224 [2024-07-25 19:18:06.529555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.224 [2024-07-25 19:18:06.530008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.224 [2024-07-25 19:18:06.530038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.224 [2024-07-25 19:18:06.530055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.224 [2024-07-25 19:18:06.530303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.224 [2024-07-25 19:18:06.530547] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.224 [2024-07-25 19:18:06.530569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.224 [2024-07-25 19:18:06.530584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.224 [2024-07-25 19:18:06.534175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.224 [2024-07-25 19:18:06.543476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.224 [2024-07-25 19:18:06.543944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.224 [2024-07-25 19:18:06.543975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.224 [2024-07-25 19:18:06.543992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.224 [2024-07-25 19:18:06.544244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.224 [2024-07-25 19:18:06.544487] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.224 [2024-07-25 19:18:06.544510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.224 [2024-07-25 19:18:06.544525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.224 [2024-07-25 19:18:06.548113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.224 [2024-07-25 19:18:06.557411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.224 [2024-07-25 19:18:06.557863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.224 [2024-07-25 19:18:06.557893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.224 [2024-07-25 19:18:06.557911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.224 [2024-07-25 19:18:06.558162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.224 [2024-07-25 19:18:06.558405] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.224 [2024-07-25 19:18:06.558427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.225 [2024-07-25 19:18:06.558443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.225 [2024-07-25 19:18:06.562029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.225 [2024-07-25 19:18:06.571343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.225 [2024-07-25 19:18:06.571772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.225 [2024-07-25 19:18:06.571803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.225 [2024-07-25 19:18:06.571822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.225 [2024-07-25 19:18:06.572061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.225 [2024-07-25 19:18:06.572316] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.225 [2024-07-25 19:18:06.572340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.225 [2024-07-25 19:18:06.572355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.225 [2024-07-25 19:18:06.575952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.225 [2024-07-25 19:18:06.585266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.225 [2024-07-25 19:18:06.585723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.225 [2024-07-25 19:18:06.585754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.225 [2024-07-25 19:18:06.585771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.225 [2024-07-25 19:18:06.586010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.225 [2024-07-25 19:18:06.586263] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.225 [2024-07-25 19:18:06.586286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.225 [2024-07-25 19:18:06.586302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.225 [2024-07-25 19:18:06.589893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.225 [2024-07-25 19:18:06.599204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.225 [2024-07-25 19:18:06.599632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.225 [2024-07-25 19:18:06.599662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.225 [2024-07-25 19:18:06.599680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.225 [2024-07-25 19:18:06.599918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.225 [2024-07-25 19:18:06.600172] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.225 [2024-07-25 19:18:06.600213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.225 [2024-07-25 19:18:06.600229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.225 [2024-07-25 19:18:06.603811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.225 [2024-07-25 19:18:06.613123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.225 [2024-07-25 19:18:06.613651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.225 [2024-07-25 19:18:06.613687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.225 [2024-07-25 19:18:06.613705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.225 [2024-07-25 19:18:06.613944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.225 [2024-07-25 19:18:06.614200] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.225 [2024-07-25 19:18:06.614224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.225 [2024-07-25 19:18:06.614240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.225 [2024-07-25 19:18:06.617820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.225 [2024-07-25 19:18:06.627140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.225 [2024-07-25 19:18:06.627599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.225 [2024-07-25 19:18:06.627630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.225 [2024-07-25 19:18:06.627648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.225 [2024-07-25 19:18:06.627887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.225 [2024-07-25 19:18:06.628142] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.225 [2024-07-25 19:18:06.628166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.225 [2024-07-25 19:18:06.628181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.225 [2024-07-25 19:18:06.631761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.225 [2024-07-25 19:18:06.641091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.225 [2024-07-25 19:18:06.641615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.225 [2024-07-25 19:18:06.641645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.225 [2024-07-25 19:18:06.641663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.225 [2024-07-25 19:18:06.641902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.225 [2024-07-25 19:18:06.642160] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.225 [2024-07-25 19:18:06.642183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.225 [2024-07-25 19:18:06.642198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.225 [2024-07-25 19:18:06.645775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.225 [2024-07-25 19:18:06.655079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.225 [2024-07-25 19:18:06.655549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.225 [2024-07-25 19:18:06.655580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.225 [2024-07-25 19:18:06.655598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.225 [2024-07-25 19:18:06.655836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.225 [2024-07-25 19:18:06.656084] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.225 [2024-07-25 19:18:06.656119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.225 [2024-07-25 19:18:06.656136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.225 [2024-07-25 19:18:06.659713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.225 [2024-07-25 19:18:06.669014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.225 [2024-07-25 19:18:06.669494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.225 [2024-07-25 19:18:06.669525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.225 [2024-07-25 19:18:06.669542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.225 [2024-07-25 19:18:06.669781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.225 [2024-07-25 19:18:06.670023] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.225 [2024-07-25 19:18:06.670046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.225 [2024-07-25 19:18:06.670061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.225 [2024-07-25 19:18:06.673691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.225 [2024-07-25 19:18:06.682996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.225 [2024-07-25 19:18:06.683469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.225 [2024-07-25 19:18:06.683501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.225 [2024-07-25 19:18:06.683518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.225 [2024-07-25 19:18:06.683757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.225 [2024-07-25 19:18:06.683999] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.225 [2024-07-25 19:18:06.684022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.225 [2024-07-25 19:18:06.684038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.225 [2024-07-25 19:18:06.687621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.484 [2024-07-25 19:18:06.696929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.484 [2024-07-25 19:18:06.697369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.484 [2024-07-25 19:18:06.697400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.484 [2024-07-25 19:18:06.697417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.484 [2024-07-25 19:18:06.697655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.484 [2024-07-25 19:18:06.697898] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.484 [2024-07-25 19:18:06.697921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.484 [2024-07-25 19:18:06.697936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.484 [2024-07-25 19:18:06.701525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.484 [2024-07-25 19:18:06.710845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.484 [2024-07-25 19:18:06.711308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.484 [2024-07-25 19:18:06.711339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.484 [2024-07-25 19:18:06.711356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.484 [2024-07-25 19:18:06.711595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.484 [2024-07-25 19:18:06.711837] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.484 [2024-07-25 19:18:06.711861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.484 [2024-07-25 19:18:06.711876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.484 [2024-07-25 19:18:06.715456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.484 [2024-07-25 19:18:06.724754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.484 [2024-07-25 19:18:06.725210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.484 [2024-07-25 19:18:06.725241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.484 [2024-07-25 19:18:06.725259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.484 [2024-07-25 19:18:06.725498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.484 [2024-07-25 19:18:06.725740] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.484 [2024-07-25 19:18:06.725763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.484 [2024-07-25 19:18:06.725778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.484 [2024-07-25 19:18:06.729369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.484 [2024-07-25 19:18:06.738661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.484 [2024-07-25 19:18:06.739109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.484 [2024-07-25 19:18:06.739140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.484 [2024-07-25 19:18:06.739158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.484 [2024-07-25 19:18:06.739396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.484 [2024-07-25 19:18:06.739638] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.484 [2024-07-25 19:18:06.739661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.484 [2024-07-25 19:18:06.739676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.484 [2024-07-25 19:18:06.743258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.484 [2024-07-25 19:18:06.752551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.484 [2024-07-25 19:18:06.752979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.484 [2024-07-25 19:18:06.753010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.484 [2024-07-25 19:18:06.753034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.484 [2024-07-25 19:18:06.753283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.484 [2024-07-25 19:18:06.753527] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.484 [2024-07-25 19:18:06.753549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.484 [2024-07-25 19:18:06.753564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.484 [2024-07-25 19:18:06.757155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.484 [2024-07-25 19:18:06.766444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.484 [2024-07-25 19:18:06.766899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.484 [2024-07-25 19:18:06.766930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.485 [2024-07-25 19:18:06.766948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.485 [2024-07-25 19:18:06.767197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.485 [2024-07-25 19:18:06.767440] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.485 [2024-07-25 19:18:06.767463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.485 [2024-07-25 19:18:06.767478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.485 [2024-07-25 19:18:06.771046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.485 [2024-07-25 19:18:06.780344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.485 [2024-07-25 19:18:06.780834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.485 [2024-07-25 19:18:06.780864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.485 [2024-07-25 19:18:06.780881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.485 [2024-07-25 19:18:06.781129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.485 [2024-07-25 19:18:06.781372] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.485 [2024-07-25 19:18:06.781395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.485 [2024-07-25 19:18:06.781410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.485 [2024-07-25 19:18:06.784997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.485 [2024-07-25 19:18:06.794304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.485 [2024-07-25 19:18:06.794765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.485 [2024-07-25 19:18:06.794795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.485 [2024-07-25 19:18:06.794812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.485 [2024-07-25 19:18:06.795050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.485 [2024-07-25 19:18:06.795302] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.485 [2024-07-25 19:18:06.795331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.485 [2024-07-25 19:18:06.795347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.485 [2024-07-25 19:18:06.798924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.485 [2024-07-25 19:18:06.808233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.485 [2024-07-25 19:18:06.808697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.485 [2024-07-25 19:18:06.808728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.485 [2024-07-25 19:18:06.808746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.485 [2024-07-25 19:18:06.808985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.485 [2024-07-25 19:18:06.809239] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.485 [2024-07-25 19:18:06.809262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.485 [2024-07-25 19:18:06.809277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.485 [2024-07-25 19:18:06.812851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.485 [2024-07-25 19:18:06.822142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.485 [2024-07-25 19:18:06.822596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.485 [2024-07-25 19:18:06.822626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.485 [2024-07-25 19:18:06.822644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.485 [2024-07-25 19:18:06.822882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.485 [2024-07-25 19:18:06.823136] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.485 [2024-07-25 19:18:06.823159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.485 [2024-07-25 19:18:06.823174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.485 [2024-07-25 19:18:06.826748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.485 [2024-07-25 19:18:06.836033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.485 [2024-07-25 19:18:06.836489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.485 [2024-07-25 19:18:06.836519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.485 [2024-07-25 19:18:06.836537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.485 [2024-07-25 19:18:06.836775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.485 [2024-07-25 19:18:06.837018] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.485 [2024-07-25 19:18:06.837040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.485 [2024-07-25 19:18:06.837055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.485 [2024-07-25 19:18:06.840637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.485 [2024-07-25 19:18:06.849918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.485 [2024-07-25 19:18:06.850382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.485 [2024-07-25 19:18:06.850412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.485 [2024-07-25 19:18:06.850429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.485 [2024-07-25 19:18:06.850668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.485 [2024-07-25 19:18:06.850909] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.485 [2024-07-25 19:18:06.850932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.485 [2024-07-25 19:18:06.850947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.485 [2024-07-25 19:18:06.854530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.485 [2024-07-25 19:18:06.863807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.485 [2024-07-25 19:18:06.864270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.485 [2024-07-25 19:18:06.864300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.485 [2024-07-25 19:18:06.864317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.485 [2024-07-25 19:18:06.864555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.485 [2024-07-25 19:18:06.864798] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.485 [2024-07-25 19:18:06.864820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.485 [2024-07-25 19:18:06.864836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.485 [2024-07-25 19:18:06.868420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.485 [2024-07-25 19:18:06.877702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.485 [2024-07-25 19:18:06.878112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.485 [2024-07-25 19:18:06.878143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.485 [2024-07-25 19:18:06.878161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.485 [2024-07-25 19:18:06.878400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.485 [2024-07-25 19:18:06.878642] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.485 [2024-07-25 19:18:06.878664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.485 [2024-07-25 19:18:06.878679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.485 [2024-07-25 19:18:06.882260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.485 [2024-07-25 19:18:06.891553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.485 [2024-07-25 19:18:06.892011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.485 [2024-07-25 19:18:06.892041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.485 [2024-07-25 19:18:06.892058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.485 [2024-07-25 19:18:06.892313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.485 [2024-07-25 19:18:06.892557] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.486 [2024-07-25 19:18:06.892579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.486 [2024-07-25 19:18:06.892595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.486 [2024-07-25 19:18:06.896173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.486 [2024-07-25 19:18:06.905457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.486 [2024-07-25 19:18:06.905906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.486 [2024-07-25 19:18:06.905936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.486 [2024-07-25 19:18:06.905953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.486 [2024-07-25 19:18:06.906207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.486 [2024-07-25 19:18:06.906450] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.486 [2024-07-25 19:18:06.906473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.486 [2024-07-25 19:18:06.906488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.486 [2024-07-25 19:18:06.910057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.486 [2024-07-25 19:18:06.919347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.486 [2024-07-25 19:18:06.919816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.486 [2024-07-25 19:18:06.919845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.486 [2024-07-25 19:18:06.919862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.486 [2024-07-25 19:18:06.920111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.486 [2024-07-25 19:18:06.920354] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.486 [2024-07-25 19:18:06.920377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.486 [2024-07-25 19:18:06.920392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.486 [2024-07-25 19:18:06.923962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.486 [2024-07-25 19:18:06.933246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.486 [2024-07-25 19:18:06.933698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.486 [2024-07-25 19:18:06.933728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.486 [2024-07-25 19:18:06.933746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.486 [2024-07-25 19:18:06.933984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.486 [2024-07-25 19:18:06.934239] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.486 [2024-07-25 19:18:06.934262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.486 [2024-07-25 19:18:06.934286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.486 [2024-07-25 19:18:06.937858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.486 [2024-07-25 19:18:06.947149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.486 [2024-07-25 19:18:06.947600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.486 [2024-07-25 19:18:06.947631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.486 [2024-07-25 19:18:06.947648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.486 [2024-07-25 19:18:06.947887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.486 [2024-07-25 19:18:06.948139] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.486 [2024-07-25 19:18:06.948162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.486 [2024-07-25 19:18:06.948177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.486 [2024-07-25 19:18:06.951747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.745 [2024-07-25 19:18:06.961031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.745 [2024-07-25 19:18:06.961479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.745 [2024-07-25 19:18:06.961509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.745 [2024-07-25 19:18:06.961526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.745 [2024-07-25 19:18:06.961765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.745 [2024-07-25 19:18:06.962007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.745 [2024-07-25 19:18:06.962030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.745 [2024-07-25 19:18:06.962045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.745 [2024-07-25 19:18:06.965628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.745 [2024-07-25 19:18:06.974911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.745 [2024-07-25 19:18:06.975370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.745 [2024-07-25 19:18:06.975401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.745 [2024-07-25 19:18:06.975418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.745 [2024-07-25 19:18:06.975657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.745 [2024-07-25 19:18:06.975899] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.745 [2024-07-25 19:18:06.975922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.745 [2024-07-25 19:18:06.975937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.745 [2024-07-25 19:18:06.979520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.745 [2024-07-25 19:18:06.988795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.745 [2024-07-25 19:18:06.989240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.745 [2024-07-25 19:18:06.989271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.745 [2024-07-25 19:18:06.989288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.745 [2024-07-25 19:18:06.989527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.745 [2024-07-25 19:18:06.989769] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.745 [2024-07-25 19:18:06.989792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.745 [2024-07-25 19:18:06.989807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.745 [2024-07-25 19:18:06.993403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.745 [2024-07-25 19:18:07.002682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.745 [2024-07-25 19:18:07.003113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.745 [2024-07-25 19:18:07.003145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.745 [2024-07-25 19:18:07.003163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.745 [2024-07-25 19:18:07.003402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.745 [2024-07-25 19:18:07.003644] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.745 [2024-07-25 19:18:07.003667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.745 [2024-07-25 19:18:07.003682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.745 [2024-07-25 19:18:07.007275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.745 [2024-07-25 19:18:07.016571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.745 [2024-07-25 19:18:07.017038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.745 [2024-07-25 19:18:07.017069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.745 [2024-07-25 19:18:07.017087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.745 [2024-07-25 19:18:07.017334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.745 [2024-07-25 19:18:07.017578] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.745 [2024-07-25 19:18:07.017601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.745 [2024-07-25 19:18:07.017616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.745 [2024-07-25 19:18:07.021195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.745 [2024-07-25 19:18:07.030476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.745 [2024-07-25 19:18:07.030906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.745 [2024-07-25 19:18:07.030937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.745 [2024-07-25 19:18:07.030955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.745 [2024-07-25 19:18:07.031211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.745 [2024-07-25 19:18:07.031461] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.745 [2024-07-25 19:18:07.031485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.745 [2024-07-25 19:18:07.031501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.745 [2024-07-25 19:18:07.035075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.745 [2024-07-25 19:18:07.044364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.745 [2024-07-25 19:18:07.044816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.745 [2024-07-25 19:18:07.044848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.745 [2024-07-25 19:18:07.044865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.745 [2024-07-25 19:18:07.045114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.745 [2024-07-25 19:18:07.045358] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.745 [2024-07-25 19:18:07.045380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.746 [2024-07-25 19:18:07.045395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.746 [2024-07-25 19:18:07.048969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.746 [2024-07-25 19:18:07.058262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:14.746 [2024-07-25 19:18:07.058710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.746 [2024-07-25 19:18:07.058741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:14.746 [2024-07-25 19:18:07.058759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:14.746 [2024-07-25 19:18:07.058998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:14.746 [2024-07-25 19:18:07.059251] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.746 [2024-07-25 19:18:07.059274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.746 [2024-07-25 19:18:07.059290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.746 [2024-07-25 19:18:07.062864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.746 [2024-07-25 19:18:07.072148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.746 [2024-07-25 19:18:07.072599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-07-25 19:18:07.072629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.746 [2024-07-25 19:18:07.072646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.746 [2024-07-25 19:18:07.072885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.746 [2024-07-25 19:18:07.073138] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.746 [2024-07-25 19:18:07.073161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.746 [2024-07-25 19:18:07.073177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.746 [2024-07-25 19:18:07.076758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.746 [2024-07-25 19:18:07.086041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.746 [2024-07-25 19:18:07.086491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-07-25 19:18:07.086522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.746 [2024-07-25 19:18:07.086539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.746 [2024-07-25 19:18:07.086777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.746 [2024-07-25 19:18:07.087019] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.746 [2024-07-25 19:18:07.087042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.746 [2024-07-25 19:18:07.087057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.746 [2024-07-25 19:18:07.090639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.746 [2024-07-25 19:18:07.099948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.746 [2024-07-25 19:18:07.100361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-07-25 19:18:07.100391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.746 [2024-07-25 19:18:07.100409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.746 [2024-07-25 19:18:07.100648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.746 [2024-07-25 19:18:07.100889] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.746 [2024-07-25 19:18:07.100912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.746 [2024-07-25 19:18:07.100927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.746 [2024-07-25 19:18:07.104508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.746 [2024-07-25 19:18:07.113793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.746 [2024-07-25 19:18:07.114259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-07-25 19:18:07.114290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.746 [2024-07-25 19:18:07.114307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.746 [2024-07-25 19:18:07.114546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.746 [2024-07-25 19:18:07.114788] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.746 [2024-07-25 19:18:07.114811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.746 [2024-07-25 19:18:07.114826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.746 [2024-07-25 19:18:07.118410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.746 [2024-07-25 19:18:07.127710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.746 [2024-07-25 19:18:07.128310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-07-25 19:18:07.128341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.746 [2024-07-25 19:18:07.128365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.746 [2024-07-25 19:18:07.128605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.746 [2024-07-25 19:18:07.128847] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.746 [2024-07-25 19:18:07.128870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.746 [2024-07-25 19:18:07.128885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.746 [2024-07-25 19:18:07.132467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.746 [2024-07-25 19:18:07.141550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.746 [2024-07-25 19:18:07.142162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-07-25 19:18:07.142193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.746 [2024-07-25 19:18:07.142211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.746 [2024-07-25 19:18:07.142450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.746 [2024-07-25 19:18:07.142692] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.746 [2024-07-25 19:18:07.142715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.746 [2024-07-25 19:18:07.142730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.746 [2024-07-25 19:18:07.146315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.746 [2024-07-25 19:18:07.155410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.746 [2024-07-25 19:18:07.155920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-07-25 19:18:07.155950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.746 [2024-07-25 19:18:07.155968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.746 [2024-07-25 19:18:07.156216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.746 [2024-07-25 19:18:07.156460] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.746 [2024-07-25 19:18:07.156483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.746 [2024-07-25 19:18:07.156499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.746 [2024-07-25 19:18:07.160073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1013753 Killed "${NVMF_APP[@]}" "$@" 00:27:14.746 19:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:14.746 19:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:14.746 19:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:14.746 19:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:14.746 19:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:14.746 19:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1014939 00:27:14.746 19:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:14.746 19:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1014939 00:27:14.746 19:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1014939 ']' 00:27:14.746 19:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.746 19:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:14.746 19:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:14.746 19:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:14.746 19:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:14.746 [2024-07-25 19:18:07.169380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.746 [2024-07-25 19:18:07.169816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-07-25 19:18:07.169846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.747 [2024-07-25 19:18:07.169863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.747 [2024-07-25 19:18:07.170112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.747 [2024-07-25 19:18:07.170355] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.747 [2024-07-25 19:18:07.170378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.747 [2024-07-25 19:18:07.170394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.747 [2024-07-25 19:18:07.173974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.747 [2024-07-25 19:18:07.183284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.747 [2024-07-25 19:18:07.183719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.747 [2024-07-25 19:18:07.183749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.747 [2024-07-25 19:18:07.183766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.747 [2024-07-25 19:18:07.184004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.747 [2024-07-25 19:18:07.184258] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.747 [2024-07-25 19:18:07.184282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.747 [2024-07-25 19:18:07.184297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.747 [2024-07-25 19:18:07.187873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.747 [2024-07-25 19:18:07.197201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.747 [2024-07-25 19:18:07.197649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.747 [2024-07-25 19:18:07.197679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.747 [2024-07-25 19:18:07.197696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.747 [2024-07-25 19:18:07.197935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.747 [2024-07-25 19:18:07.198195] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.747 [2024-07-25 19:18:07.198220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.747 [2024-07-25 19:18:07.198235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.747 [2024-07-25 19:18:07.201824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.747 [2024-07-25 19:18:07.211158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.747 [2024-07-25 19:18:07.211578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.747 [2024-07-25 19:18:07.211609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:14.747 [2024-07-25 19:18:07.211627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:14.747 [2024-07-25 19:18:07.211866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:14.747 [2024-07-25 19:18:07.212119] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.747 [2024-07-25 19:18:07.212142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.747 [2024-07-25 19:18:07.212158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.006 [2024-07-25 19:18:07.215734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.006 [2024-07-25 19:18:07.216379] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:15.006 [2024-07-25 19:18:07.216460] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.006 [2024-07-25 19:18:07.225035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.006 [2024-07-25 19:18:07.225471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.006 [2024-07-25 19:18:07.225502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.006 [2024-07-25 19:18:07.225520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.006 [2024-07-25 19:18:07.225758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.006 [2024-07-25 19:18:07.226000] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.006 [2024-07-25 19:18:07.226023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.006 [2024-07-25 19:18:07.226038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.006 [2024-07-25 19:18:07.229629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.006 [2024-07-25 19:18:07.239090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.006 [2024-07-25 19:18:07.239568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.006 [2024-07-25 19:18:07.239599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.006 [2024-07-25 19:18:07.239618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.006 [2024-07-25 19:18:07.239857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.006 [2024-07-25 19:18:07.240099] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.006 [2024-07-25 19:18:07.240138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.006 [2024-07-25 19:18:07.240154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.006 [2024-07-25 19:18:07.243727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.006 [2024-07-25 19:18:07.253023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.006 [2024-07-25 19:18:07.253489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.006 [2024-07-25 19:18:07.253521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.006 [2024-07-25 19:18:07.253539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.006 [2024-07-25 19:18:07.253777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.006 [2024-07-25 19:18:07.254019] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.007 [2024-07-25 19:18:07.254042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.007 [2024-07-25 19:18:07.254058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.007 [2024-07-25 19:18:07.257639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.007 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.007 [2024-07-25 19:18:07.266936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.007 [2024-07-25 19:18:07.267432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.007 [2024-07-25 19:18:07.267463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.007 [2024-07-25 19:18:07.267481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.007 [2024-07-25 19:18:07.267719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.007 [2024-07-25 19:18:07.267962] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.007 [2024-07-25 19:18:07.267984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.007 [2024-07-25 19:18:07.268000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.007 [2024-07-25 19:18:07.271591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.007 [2024-07-25 19:18:07.280895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.007 [2024-07-25 19:18:07.281343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.007 [2024-07-25 19:18:07.281375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.007 [2024-07-25 19:18:07.281393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.007 [2024-07-25 19:18:07.281633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.007 [2024-07-25 19:18:07.281876] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.007 [2024-07-25 19:18:07.281898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.007 [2024-07-25 19:18:07.281914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.007 [2024-07-25 19:18:07.285502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.007 [2024-07-25 19:18:07.294812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.007 [2024-07-25 19:18:07.295265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.007 [2024-07-25 19:18:07.295296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.007 [2024-07-25 19:18:07.295314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.007 [2024-07-25 19:18:07.295552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.007 [2024-07-25 19:18:07.295794] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.007 [2024-07-25 19:18:07.295817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.007 [2024-07-25 19:18:07.295832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.007 [2024-07-25 19:18:07.299419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.007 [2024-07-25 19:18:07.305668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:15.007 [2024-07-25 19:18:07.308713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.007 [2024-07-25 19:18:07.309173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.007 [2024-07-25 19:18:07.309205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.007 [2024-07-25 19:18:07.309223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.007 [2024-07-25 19:18:07.309464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.007 [2024-07-25 19:18:07.309707] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.007 [2024-07-25 19:18:07.309730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.007 [2024-07-25 19:18:07.309746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.007 [2024-07-25 19:18:07.313343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.007 [2024-07-25 19:18:07.322658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.007 [2024-07-25 19:18:07.323234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.007 [2024-07-25 19:18:07.323286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.007 [2024-07-25 19:18:07.323306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.007 [2024-07-25 19:18:07.323556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.007 [2024-07-25 19:18:07.323802] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.007 [2024-07-25 19:18:07.323826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.007 [2024-07-25 19:18:07.323844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.007 [2024-07-25 19:18:07.327432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.007 [2024-07-25 19:18:07.336719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.007 [2024-07-25 19:18:07.337176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.007 [2024-07-25 19:18:07.337208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.007 [2024-07-25 19:18:07.337236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.007 [2024-07-25 19:18:07.337477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.007 [2024-07-25 19:18:07.337720] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.007 [2024-07-25 19:18:07.337744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.007 [2024-07-25 19:18:07.337760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.007 [2024-07-25 19:18:07.341368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.007 [2024-07-25 19:18:07.350669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.007 [2024-07-25 19:18:07.351154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.007 [2024-07-25 19:18:07.351186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.007 [2024-07-25 19:18:07.351205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.007 [2024-07-25 19:18:07.351444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.007 [2024-07-25 19:18:07.351686] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.007 [2024-07-25 19:18:07.351710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.007 [2024-07-25 19:18:07.351725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.007 [2024-07-25 19:18:07.355308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.007 [2024-07-25 19:18:07.364597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.007 [2024-07-25 19:18:07.365067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.007 [2024-07-25 19:18:07.365115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.007 [2024-07-25 19:18:07.365135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.007 [2024-07-25 19:18:07.365374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.007 [2024-07-25 19:18:07.365616] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.007 [2024-07-25 19:18:07.365639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.008 [2024-07-25 19:18:07.365655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.008 [2024-07-25 19:18:07.369239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.008 [2024-07-25 19:18:07.378613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.008 [2024-07-25 19:18:07.379236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.008 [2024-07-25 19:18:07.379277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.008 [2024-07-25 19:18:07.379298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.008 [2024-07-25 19:18:07.379559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.008 [2024-07-25 19:18:07.379825] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.008 [2024-07-25 19:18:07.379849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.008 [2024-07-25 19:18:07.379876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.008 [2024-07-25 19:18:07.383487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.008 [2024-07-25 19:18:07.392603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.008 [2024-07-25 19:18:07.393066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.008 [2024-07-25 19:18:07.393115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.008 [2024-07-25 19:18:07.393135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.008 [2024-07-25 19:18:07.393375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.008 [2024-07-25 19:18:07.393625] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.008 [2024-07-25 19:18:07.393648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.008 [2024-07-25 19:18:07.393663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.008 [2024-07-25 19:18:07.397245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.008 [2024-07-25 19:18:07.406538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.008 [2024-07-25 19:18:07.407025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.008 [2024-07-25 19:18:07.407057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.008 [2024-07-25 19:18:07.407074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.008 [2024-07-25 19:18:07.407331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.008 [2024-07-25 19:18:07.407574] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.008 [2024-07-25 19:18:07.407598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.008 [2024-07-25 19:18:07.407613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.008 [2024-07-25 19:18:07.411199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.008 [2024-07-25 19:18:07.420493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.008 [2024-07-25 19:18:07.420935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.008 [2024-07-25 19:18:07.420966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.008 [2024-07-25 19:18:07.420984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.008 [2024-07-25 19:18:07.421233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.008 [2024-07-25 19:18:07.421476] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.008 [2024-07-25 19:18:07.421500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.008 [2024-07-25 19:18:07.421516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.008 [2024-07-25 19:18:07.425089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.008 [2024-07-25 19:18:07.431381] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:15.008 [2024-07-25 19:18:07.431429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.008 [2024-07-25 19:18:07.431454] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:15.008 [2024-07-25 19:18:07.431477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:15.008 [2024-07-25 19:18:07.431496] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:15.008 [2024-07-25 19:18:07.431591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:15.008 [2024-07-25 19:18:07.431658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:15.008 [2024-07-25 19:18:07.431667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.008 [2024-07-25 19:18:07.434402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.008 [2024-07-25 19:18:07.434841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.008 [2024-07-25 19:18:07.434872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.008 [2024-07-25 19:18:07.434891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.008 [2024-07-25 19:18:07.435140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.008 [2024-07-25 19:18:07.435394] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.008 [2024-07-25 19:18:07.435417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.008 [2024-07-25 19:18:07.435433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.008 [2024-07-25 19:18:07.439029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.008 [2024-07-25 19:18:07.448356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.008 [2024-07-25 19:18:07.448963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.008 [2024-07-25 19:18:07.449005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.008 [2024-07-25 19:18:07.449025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.008 [2024-07-25 19:18:07.449283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.008 [2024-07-25 19:18:07.449534] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.008 [2024-07-25 19:18:07.449558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.008 [2024-07-25 19:18:07.449575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.008 [2024-07-25 19:18:07.453180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.008 [2024-07-25 19:18:07.462290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.008 [2024-07-25 19:18:07.462887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.008 [2024-07-25 19:18:07.462930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.008 [2024-07-25 19:18:07.462951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.008 [2024-07-25 19:18:07.463213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.008 [2024-07-25 19:18:07.463477] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.008 [2024-07-25 19:18:07.463502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.008 [2024-07-25 19:18:07.463520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.008 [2024-07-25 19:18:07.467117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.008 [2024-07-25 19:18:07.476246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.267 [2024-07-25 19:18:07.476815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.267 [2024-07-25 19:18:07.476855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.267 [2024-07-25 19:18:07.476882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.268 [2024-07-25 19:18:07.477141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.268 [2024-07-25 19:18:07.477388] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.268 [2024-07-25 19:18:07.477412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.268 [2024-07-25 19:18:07.477430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.268 [2024-07-25 19:18:07.481019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.268 [2024-07-25 19:18:07.490156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.268 [2024-07-25 19:18:07.490686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.268 [2024-07-25 19:18:07.490725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.268 [2024-07-25 19:18:07.490746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.268 [2024-07-25 19:18:07.490994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.268 [2024-07-25 19:18:07.491250] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.268 [2024-07-25 19:18:07.491274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.268 [2024-07-25 19:18:07.491291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.268 [2024-07-25 19:18:07.494887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.268 [2024-07-25 19:18:07.504205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.268 [2024-07-25 19:18:07.504766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.268 [2024-07-25 19:18:07.504811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.268 [2024-07-25 19:18:07.504832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.268 [2024-07-25 19:18:07.505082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.268 [2024-07-25 19:18:07.505340] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.268 [2024-07-25 19:18:07.505373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.268 [2024-07-25 19:18:07.505391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.268 [2024-07-25 19:18:07.508991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.268 [2024-07-25 19:18:07.518096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.268 [2024-07-25 19:18:07.518733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.268 [2024-07-25 19:18:07.518784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.268 [2024-07-25 19:18:07.518805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.268 [2024-07-25 19:18:07.519052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.268 [2024-07-25 19:18:07.519308] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.268 [2024-07-25 19:18:07.519332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.268 [2024-07-25 19:18:07.519360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.268 [2024-07-25 19:18:07.522964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.268 [2024-07-25 19:18:07.532043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.268 [2024-07-25 19:18:07.532505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.268 [2024-07-25 19:18:07.532536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.268 [2024-07-25 19:18:07.532554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.268 [2024-07-25 19:18:07.532794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.268 [2024-07-25 19:18:07.533036] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.268 [2024-07-25 19:18:07.533060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.268 [2024-07-25 19:18:07.533075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.268 [2024-07-25 19:18:07.536656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.268 [2024-07-25 19:18:07.545963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.268 [2024-07-25 19:18:07.546443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.268 [2024-07-25 19:18:07.546473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.268 [2024-07-25 19:18:07.546491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.268 [2024-07-25 19:18:07.546730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.268 [2024-07-25 19:18:07.546972] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.268 [2024-07-25 19:18:07.546996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.268 [2024-07-25 19:18:07.547011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.268 [2024-07-25 19:18:07.550593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.268 [2024-07-25 19:18:07.559879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.268 [2024-07-25 19:18:07.560301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.268 [2024-07-25 19:18:07.560333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.268 [2024-07-25 19:18:07.560359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.268 [2024-07-25 19:18:07.560599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.268 [2024-07-25 19:18:07.560842] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.268 [2024-07-25 19:18:07.560865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.268 [2024-07-25 19:18:07.560880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.268 [2024-07-25 19:18:07.564463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.268 [2024-07-25 19:18:07.573752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.268 [2024-07-25 19:18:07.574219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.268 [2024-07-25 19:18:07.574250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.268 [2024-07-25 19:18:07.574268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.268 [2024-07-25 19:18:07.574507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.268 [2024-07-25 19:18:07.574749] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.268 [2024-07-25 19:18:07.574772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.268 [2024-07-25 19:18:07.574787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.268 [2024-07-25 19:18:07.578366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.268 [2024-07-25 19:18:07.587648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.268 [2024-07-25 19:18:07.588078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.268 [2024-07-25 19:18:07.588115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.268 [2024-07-25 19:18:07.588146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.268 [2024-07-25 19:18:07.588384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.268 [2024-07-25 19:18:07.588626] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.268 [2024-07-25 19:18:07.588649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.268 [2024-07-25 19:18:07.588664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.268 [2024-07-25 19:18:07.592242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.268 [2024-07-25 19:18:07.601553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.268 [2024-07-25 19:18:07.602013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.268 [2024-07-25 19:18:07.602044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.268 [2024-07-25 19:18:07.602061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.268 [2024-07-25 19:18:07.602308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.268 [2024-07-25 19:18:07.602551] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.268 [2024-07-25 19:18:07.602580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.268 [2024-07-25 19:18:07.602596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.268 [2024-07-25 19:18:07.606183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.268 [2024-07-25 19:18:07.615333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.268 [2024-07-25 19:18:07.615777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.268 [2024-07-25 19:18:07.615805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.269 [2024-07-25 19:18:07.615821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.269 [2024-07-25 19:18:07.616064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.269 [2024-07-25 19:18:07.616305] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.269 [2024-07-25 19:18:07.616327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.269 [2024-07-25 19:18:07.616341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.269 [2024-07-25 19:18:07.619588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.269 [2024-07-25 19:18:07.628945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.269 [2024-07-25 19:18:07.629368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.269 [2024-07-25 19:18:07.629395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.269 [2024-07-25 19:18:07.629416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.269 [2024-07-25 19:18:07.629658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.269 [2024-07-25 19:18:07.629864] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.269 [2024-07-25 19:18:07.629883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.269 [2024-07-25 19:18:07.629896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.269 [2024-07-25 19:18:07.633150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.269 [2024-07-25 19:18:07.642387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.269 [2024-07-25 19:18:07.642850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.269 [2024-07-25 19:18:07.642877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.269 [2024-07-25 19:18:07.642893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.269 [2024-07-25 19:18:07.643149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.269 [2024-07-25 19:18:07.643367] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.269 [2024-07-25 19:18:07.643388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.269 [2024-07-25 19:18:07.643402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.269 [2024-07-25 19:18:07.646597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.269 [2024-07-25 19:18:07.655942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.269 [2024-07-25 19:18:07.656368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.269 [2024-07-25 19:18:07.656396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.269 [2024-07-25 19:18:07.656411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.269 [2024-07-25 19:18:07.656637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.269 [2024-07-25 19:18:07.656842] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.269 [2024-07-25 19:18:07.656861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.269 [2024-07-25 19:18:07.656874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.269 [2024-07-25 19:18:07.660022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.269 [2024-07-25 19:18:07.669443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.269 [2024-07-25 19:18:07.669908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.269 [2024-07-25 19:18:07.669935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.269 [2024-07-25 19:18:07.669951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.269 [2024-07-25 19:18:07.670220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.269 [2024-07-25 19:18:07.670446] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.269 [2024-07-25 19:18:07.670466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.269 [2024-07-25 19:18:07.670478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.269 [2024-07-25 19:18:07.673700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.269 [2024-07-25 19:18:07.682878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.269 [2024-07-25 19:18:07.683263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.269 [2024-07-25 19:18:07.683291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.269 [2024-07-25 19:18:07.683307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.269 [2024-07-25 19:18:07.683535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.269 [2024-07-25 19:18:07.683756] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.269 [2024-07-25 19:18:07.683775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.269 [2024-07-25 19:18:07.683788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.269 [2024-07-25 19:18:07.686961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.269 [2024-07-25 19:18:07.696475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.269 [2024-07-25 19:18:07.696923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.269 [2024-07-25 19:18:07.696951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.269 [2024-07-25 19:18:07.696967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.269 [2024-07-25 19:18:07.697212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.269 [2024-07-25 19:18:07.697439] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.269 [2024-07-25 19:18:07.697458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.269 [2024-07-25 19:18:07.697471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.269 [2024-07-25 19:18:07.700637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.269 [2024-07-25 19:18:07.709882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.269 [2024-07-25 19:18:07.710290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.269 [2024-07-25 19:18:07.710318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.269 [2024-07-25 19:18:07.710334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.269 [2024-07-25 19:18:07.710576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.269 [2024-07-25 19:18:07.710782] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.269 [2024-07-25 19:18:07.710801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.269 [2024-07-25 19:18:07.710815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.269 [2024-07-25 19:18:07.713963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.269 [2024-07-25 19:18:07.723325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.269 [2024-07-25 19:18:07.723766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.269 [2024-07-25 19:18:07.723794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.269 [2024-07-25 19:18:07.723811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.269 [2024-07-25 19:18:07.724052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.269 [2024-07-25 19:18:07.724302] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.269 [2024-07-25 19:18:07.724323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.269 [2024-07-25 19:18:07.724338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.269 [2024-07-25 19:18:07.727521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.269 [2024-07-25 19:18:07.737005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.269 [2024-07-25 19:18:07.737444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.269 [2024-07-25 19:18:07.737473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.269 [2024-07-25 19:18:07.737489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.528 [2024-07-25 19:18:07.737703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.528 [2024-07-25 19:18:07.737938] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.528 [2024-07-25 19:18:07.737959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.528 [2024-07-25 19:18:07.737977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.528 [2024-07-25 19:18:07.741207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.528 [2024-07-25 19:18:07.750654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.528 [2024-07-25 19:18:07.751034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.528 [2024-07-25 19:18:07.751069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.528 [2024-07-25 19:18:07.751085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.528 [2024-07-25 19:18:07.751320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.528 [2024-07-25 19:18:07.751544] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.528 [2024-07-25 19:18:07.751564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.528 [2024-07-25 19:18:07.751576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.528 [2024-07-25 19:18:07.754733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.528 [2024-07-25 19:18:07.764095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.528 [2024-07-25 19:18:07.764532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.528 [2024-07-25 19:18:07.764560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.528 [2024-07-25 19:18:07.764575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.528 [2024-07-25 19:18:07.764816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.528 [2024-07-25 19:18:07.765023] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.528 [2024-07-25 19:18:07.765042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.528 [2024-07-25 19:18:07.765055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.528 [2024-07-25 19:18:07.768272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.528 [2024-07-25 19:18:07.777676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.528 [2024-07-25 19:18:07.778149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.528 [2024-07-25 19:18:07.778177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.528 [2024-07-25 19:18:07.778193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.528 [2024-07-25 19:18:07.778421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.528 [2024-07-25 19:18:07.778642] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.528 [2024-07-25 19:18:07.778666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.528 [2024-07-25 19:18:07.778680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.528 [2024-07-25 19:18:07.781821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.528 [2024-07-25 19:18:07.791235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.528 [2024-07-25 19:18:07.791660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.528 [2024-07-25 19:18:07.791693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.528 [2024-07-25 19:18:07.791710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.528 [2024-07-25 19:18:07.791952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.528 [2024-07-25 19:18:07.792195] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.528 [2024-07-25 19:18:07.792218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.528 [2024-07-25 19:18:07.792231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.528 [2024-07-25 19:18:07.795522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.528 [2024-07-25 19:18:07.804822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.528 [2024-07-25 19:18:07.805214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.528 [2024-07-25 19:18:07.805243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.528 [2024-07-25 19:18:07.805259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.528 [2024-07-25 19:18:07.805488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.528 [2024-07-25 19:18:07.805693] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.528 [2024-07-25 19:18:07.805712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.528 [2024-07-25 19:18:07.805725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.528 [2024-07-25 19:18:07.808954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.528 [2024-07-25 19:18:07.818391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.528 [2024-07-25 19:18:07.818803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.528 [2024-07-25 19:18:07.818831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.528 [2024-07-25 19:18:07.818846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.528 [2024-07-25 19:18:07.819062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.528 [2024-07-25 19:18:07.819317] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.528 [2024-07-25 19:18:07.819339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.528 [2024-07-25 19:18:07.819352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.528 [2024-07-25 19:18:07.822632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.528 [2024-07-25 19:18:07.831950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.528 [2024-07-25 19:18:07.832366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.528 [2024-07-25 19:18:07.832394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.528 [2024-07-25 19:18:07.832410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.529 [2024-07-25 19:18:07.832647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.529 [2024-07-25 19:18:07.832858] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.529 [2024-07-25 19:18:07.832878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.529 [2024-07-25 19:18:07.832891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.529 [2024-07-25 19:18:07.836208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.529 [2024-07-25 19:18:07.845652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.529 [2024-07-25 19:18:07.846070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.529 [2024-07-25 19:18:07.846097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.529 [2024-07-25 19:18:07.846121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.529 [2024-07-25 19:18:07.846336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.529 [2024-07-25 19:18:07.846575] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.529 [2024-07-25 19:18:07.846595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.529 [2024-07-25 19:18:07.846608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.529 [2024-07-25 19:18:07.849752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.529 [2024-07-25 19:18:07.859166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.529 [2024-07-25 19:18:07.859585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.529 [2024-07-25 19:18:07.859613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.529 [2024-07-25 19:18:07.859629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.529 [2024-07-25 19:18:07.859874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.529 [2024-07-25 19:18:07.860079] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.529 [2024-07-25 19:18:07.860127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.529 [2024-07-25 19:18:07.860141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.529 [2024-07-25 19:18:07.863323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.529 [2024-07-25 19:18:07.872754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.529 [2024-07-25 19:18:07.873223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.529 [2024-07-25 19:18:07.873250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.529 [2024-07-25 19:18:07.873266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.529 [2024-07-25 19:18:07.873503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.529 [2024-07-25 19:18:07.873724] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.529 [2024-07-25 19:18:07.873743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.529 [2024-07-25 19:18:07.873755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.529 [2024-07-25 19:18:07.876937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.529 [2024-07-25 19:18:07.886211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.529 [2024-07-25 19:18:07.886651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.529 [2024-07-25 19:18:07.886679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.529 [2024-07-25 19:18:07.886695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.529 [2024-07-25 19:18:07.886935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.529 [2024-07-25 19:18:07.887167] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.529 [2024-07-25 19:18:07.887189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.529 [2024-07-25 19:18:07.887202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.529 [2024-07-25 19:18:07.890392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.529 [2024-07-25 19:18:07.899751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.529 [2024-07-25 19:18:07.900205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.529 [2024-07-25 19:18:07.900232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.529 [2024-07-25 19:18:07.900249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.529 [2024-07-25 19:18:07.900491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.529 [2024-07-25 19:18:07.900696] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.529 [2024-07-25 19:18:07.900716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.529 [2024-07-25 19:18:07.900729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.529 [2024-07-25 19:18:07.903869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.529 [2024-07-25 19:18:07.913292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.529 [2024-07-25 19:18:07.913755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.529 [2024-07-25 19:18:07.913782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.529 [2024-07-25 19:18:07.913798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.529 [2024-07-25 19:18:07.914027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.529 [2024-07-25 19:18:07.914276] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.529 [2024-07-25 19:18:07.914297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.529 [2024-07-25 19:18:07.914310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.529 [2024-07-25 19:18:07.917574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.529 [2024-07-25 19:18:07.926921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.529 [2024-07-25 19:18:07.927336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.529 [2024-07-25 19:18:07.927364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.529 [2024-07-25 19:18:07.927385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.529 [2024-07-25 19:18:07.927629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.529 [2024-07-25 19:18:07.927834] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.529 [2024-07-25 19:18:07.927853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.529 [2024-07-25 19:18:07.927866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.529 [2024-07-25 19:18:07.931116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.529 [2024-07-25 19:18:07.940570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.529 [2024-07-25 19:18:07.940943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.529 [2024-07-25 19:18:07.940985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.529 [2024-07-25 19:18:07.941001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.529 [2024-07-25 19:18:07.941245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.529 [2024-07-25 19:18:07.941492] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.529 [2024-07-25 19:18:07.941512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.529 [2024-07-25 19:18:07.941541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.529 [2024-07-25 19:18:07.944810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.529 [2024-07-25 19:18:07.954011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.529 [2024-07-25 19:18:07.954440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.529 [2024-07-25 19:18:07.954468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.529 [2024-07-25 19:18:07.954483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.529 [2024-07-25 19:18:07.954726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.529 [2024-07-25 19:18:07.954930] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.529 [2024-07-25 19:18:07.954950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.529 [2024-07-25 19:18:07.954962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.529 [2024-07-25 19:18:07.958132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.529 [2024-07-25 19:18:07.967498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.529 [2024-07-25 19:18:07.967921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.529 [2024-07-25 19:18:07.967948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.529 [2024-07-25 19:18:07.967964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.530 [2024-07-25 19:18:07.968199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.530 [2024-07-25 19:18:07.968425] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.530 [2024-07-25 19:18:07.968449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.530 [2024-07-25 19:18:07.968463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.530 [2024-07-25 19:18:07.971680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.530 [2024-07-25 19:18:07.981016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.530 [2024-07-25 19:18:07.981446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.530 [2024-07-25 19:18:07.981474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.530 [2024-07-25 19:18:07.981490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.530 [2024-07-25 19:18:07.981733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.530 [2024-07-25 19:18:07.981938] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.530 [2024-07-25 19:18:07.981957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.530 [2024-07-25 19:18:07.981970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.530 [2024-07-25 19:18:07.985198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.530 [2024-07-25 19:18:07.994538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.530 [2024-07-25 19:18:07.995015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.530 [2024-07-25 19:18:07.995043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.530 [2024-07-25 19:18:07.995059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.530 [2024-07-25 19:18:07.995280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.530 [2024-07-25 19:18:07.995530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.530 [2024-07-25 19:18:07.995550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.530 [2024-07-25 19:18:07.995562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.788 [2024-07-25 19:18:07.998876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.788 [2024-07-25 19:18:08.007984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.788 [2024-07-25 19:18:08.008395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.788 [2024-07-25 19:18:08.008423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.788 [2024-07-25 19:18:08.008439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.788 [2024-07-25 19:18:08.008681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.788 [2024-07-25 19:18:08.008886] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.788 [2024-07-25 19:18:08.008906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.788 [2024-07-25 19:18:08.008919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.788 [2024-07-25 19:18:08.012117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.788 [2024-07-25 19:18:08.021489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.788 [2024-07-25 19:18:08.021834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.788 [2024-07-25 19:18:08.021860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.788 [2024-07-25 19:18:08.021875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.788 [2024-07-25 19:18:08.022096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.788 [2024-07-25 19:18:08.022331] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.788 [2024-07-25 19:18:08.022351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.788 [2024-07-25 19:18:08.022364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.788 [2024-07-25 19:18:08.025574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.788 [2024-07-25 19:18:08.034907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.788 [2024-07-25 19:18:08.035325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.788 [2024-07-25 19:18:08.035353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420
00:27:15.788 [2024-07-25 19:18:08.035370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set
00:27:15.788 [2024-07-25 19:18:08.035613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor
00:27:15.789 [2024-07-25 19:18:08.035819] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.789 [2024-07-25 19:18:08.035849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.789 [2024-07-25 19:18:08.035862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.789 [2024-07-25 19:18:08.039000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.789 [2024-07-25 19:18:08.048442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.789 [2024-07-25 19:18:08.048938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.789 [2024-07-25 19:18:08.048966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.789 [2024-07-25 19:18:08.048982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.789 [2024-07-25 19:18:08.049206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.789 [2024-07-25 19:18:08.049447] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.789 [2024-07-25 19:18:08.049468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.789 [2024-07-25 19:18:08.049481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.789 [2024-07-25 19:18:08.052646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.789 [2024-07-25 19:18:08.061947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.789 [2024-07-25 19:18:08.062380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.789 [2024-07-25 19:18:08.062408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.789 [2024-07-25 19:18:08.062436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.789 [2024-07-25 19:18:08.062678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.789 [2024-07-25 19:18:08.062883] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.789 [2024-07-25 19:18:08.062903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.789 [2024-07-25 19:18:08.062916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.789 [2024-07-25 19:18:08.066093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.789 [2024-07-25 19:18:08.075484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.789 [2024-07-25 19:18:08.075885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.789 [2024-07-25 19:18:08.075913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.789 [2024-07-25 19:18:08.075929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.789 [2024-07-25 19:18:08.076194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.789 [2024-07-25 19:18:08.076431] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.789 [2024-07-25 19:18:08.076452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.789 [2024-07-25 19:18:08.076465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.789 [2024-07-25 19:18:08.079691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.789 [2024-07-25 19:18:08.088961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.789 [2024-07-25 19:18:08.089388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.789 [2024-07-25 19:18:08.089420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.789 [2024-07-25 19:18:08.089436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.789 [2024-07-25 19:18:08.089676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.789 [2024-07-25 19:18:08.089881] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.789 [2024-07-25 19:18:08.089901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.789 [2024-07-25 19:18:08.089915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.789 [2024-07-25 19:18:08.093113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.789 [2024-07-25 19:18:08.102538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.789 [2024-07-25 19:18:08.102981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.789 [2024-07-25 19:18:08.103009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.789 [2024-07-25 19:18:08.103025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.789 [2024-07-25 19:18:08.103260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.789 [2024-07-25 19:18:08.103485] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.789 [2024-07-25 19:18:08.103510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.789 [2024-07-25 19:18:08.103523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.789 [2024-07-25 19:18:08.106753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.789 [2024-07-25 19:18:08.115917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.789 [2024-07-25 19:18:08.116310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.789 [2024-07-25 19:18:08.116338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.789 [2024-07-25 19:18:08.116355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.789 [2024-07-25 19:18:08.116597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.789 [2024-07-25 19:18:08.116803] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.789 [2024-07-25 19:18:08.116822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.789 [2024-07-25 19:18:08.116835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.789 [2024-07-25 19:18:08.120006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.789 [2024-07-25 19:18:08.129379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.789 [2024-07-25 19:18:08.129831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.789 [2024-07-25 19:18:08.129859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.789 [2024-07-25 19:18:08.129876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.789 [2024-07-25 19:18:08.130124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.789 [2024-07-25 19:18:08.130350] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.789 [2024-07-25 19:18:08.130371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.789 [2024-07-25 19:18:08.130384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.789 [2024-07-25 19:18:08.133573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.789 [2024-07-25 19:18:08.142939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.789 [2024-07-25 19:18:08.143349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.789 [2024-07-25 19:18:08.143376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.789 [2024-07-25 19:18:08.143392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.789 [2024-07-25 19:18:08.143635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.789 [2024-07-25 19:18:08.143840] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.789 [2024-07-25 19:18:08.143860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.789 [2024-07-25 19:18:08.143873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.789 [2024-07-25 19:18:08.147091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.789 [2024-07-25 19:18:08.156459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.789 [2024-07-25 19:18:08.156854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.789 [2024-07-25 19:18:08.156882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.789 [2024-07-25 19:18:08.156899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.789 [2024-07-25 19:18:08.157155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.789 [2024-07-25 19:18:08.157375] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.789 [2024-07-25 19:18:08.157395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.789 [2024-07-25 19:18:08.157424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.789 [2024-07-25 19:18:08.160579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.789 [2024-07-25 19:18:08.169979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.789 [2024-07-25 19:18:08.170399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.789 [2024-07-25 19:18:08.170427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.789 [2024-07-25 19:18:08.170443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.790 [2024-07-25 19:18:08.170658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.790 [2024-07-25 19:18:08.170885] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.790 [2024-07-25 19:18:08.170905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.790 [2024-07-25 19:18:08.170918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.790 [2024-07-25 19:18:08.174180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.790 [2024-07-25 19:18:08.183574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.790 [2024-07-25 19:18:08.184021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.790 [2024-07-25 19:18:08.184049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.790 [2024-07-25 19:18:08.184065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.790 [2024-07-25 19:18:08.184287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.790 [2024-07-25 19:18:08.184534] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.790 [2024-07-25 19:18:08.184554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.790 [2024-07-25 19:18:08.184567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.790 [2024-07-25 19:18:08.187807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.790 [2024-07-25 19:18:08.197192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.790 [2024-07-25 19:18:08.197616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.790 [2024-07-25 19:18:08.197645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.790 [2024-07-25 19:18:08.197661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.790 [2024-07-25 19:18:08.197877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.790 [2024-07-25 19:18:08.198131] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.790 [2024-07-25 19:18:08.198154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.790 [2024-07-25 19:18:08.198168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.790 [2024-07-25 19:18:08.201415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.790 [2024-07-25 19:18:08.202439] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.790 [2024-07-25 19:18:08.210686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.790 [2024-07-25 19:18:08.211083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.790 [2024-07-25 19:18:08.211123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.790 [2024-07-25 19:18:08.211140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.790 [2024-07-25 19:18:08.211355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.790 [2024-07-25 19:18:08.211576] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.790 [2024-07-25 19:18:08.211596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.790 [2024-07-25 19:18:08.211609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.790 [2024-07-25 19:18:08.214841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.790 [2024-07-25 19:18:08.224270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.790 [2024-07-25 19:18:08.224677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.790 [2024-07-25 19:18:08.224704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.790 [2024-07-25 19:18:08.224721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.790 [2024-07-25 19:18:08.224959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.790 [2024-07-25 19:18:08.225208] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.790 [2024-07-25 19:18:08.225235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.790 [2024-07-25 19:18:08.225250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.790 [2024-07-25 19:18:08.228528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.790 [2024-07-25 19:18:08.237930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.790 [2024-07-25 19:18:08.238514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.790 [2024-07-25 19:18:08.238562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.790 [2024-07-25 19:18:08.238580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.790 [2024-07-25 19:18:08.238825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.790 [2024-07-25 19:18:08.239033] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.790 [2024-07-25 19:18:08.239053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.790 [2024-07-25 19:18:08.239068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.790 [2024-07-25 19:18:08.242349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.790 Malloc0 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.790 [2024-07-25 19:18:08.251734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.790 [2024-07-25 19:18:08.252291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.790 [2024-07-25 19:18:08.252321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:15.790 [2024-07-25 19:18:08.252340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:15.790 [2024-07-25 19:18:08.252591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:15.790 [2024-07-25 19:18:08.252797] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.790 [2024-07-25 19:18:08.252817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.790 [2024-07-25 19:18:08.252832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.790 [2024-07-25 19:18:08.256093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.790 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.049 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.049 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:16.049 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.049 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.049 [2024-07-25 19:18:08.265457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.049 [2024-07-25 19:18:08.265880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.049 [2024-07-25 19:18:08.265907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892ac0 with addr=10.0.0.2, port=4420 00:27:16.049 [2024-07-25 19:18:08.265923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892ac0 is same with the state(5) to be set 00:27:16.049 [2024-07-25 19:18:08.266148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892ac0 (9): Bad file descriptor 00:27:16.049 [2024-07-25 19:18:08.266367] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.049 [2024-07-25 19:18:08.266402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller 
reinitialization failed 00:27:16.049 [2024-07-25 19:18:08.266422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.049 [2024-07-25 19:18:08.268495] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.049 [2024-07-25 19:18:08.269686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.049 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.049 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1014233 00:27:16.049 [2024-07-25 19:18:08.278985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.049 [2024-07-25 19:18:08.357082] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:26.028 00:27:26.028 Latency(us) 00:27:26.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.028 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:26.028 Verification LBA range: start 0x0 length 0x4000 00:27:26.028 Nvme1n1 : 15.01 6352.59 24.81 10397.49 0.00 7616.41 825.27 19612.25 00:27:26.028 =================================================================================================================== 00:27:26.028 Total : 6352.59 24.81 10397.49 0.00 7616.41 825.27 19612.25 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:26.028 rmmod nvme_tcp 00:27:26.028 rmmod nvme_fabrics 00:27:26.028 rmmod nvme_keyring 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1014939 ']' 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1014939 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1014939 ']' 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1014939 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1014939 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1014939' 00:27:26.028 killing process with pid 1014939 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1014939 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1014939 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.028 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:27.402 00:27:27.402 real 0m23.927s 00:27:27.402 user 1m3.770s 00:27:27.402 sys 0m4.620s 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:27.402 ************************************ 00:27:27.402 END TEST nvmf_bdevperf 00:27:27.402 ************************************ 
00:27:27.402 19:18:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.402 ************************************ 00:27:27.402 START TEST nvmf_target_disconnect 00:27:27.402 ************************************ 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:27.402 * Looking for test storage... 00:27:27.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.402 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.403 19:18:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:27.403 19:18:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:27.403 19:18:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:29.988 19:18:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.988 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.989 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.989 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.989 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.989 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:29.989 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:29.989 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:29.989 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:29.989 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:29.989 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:29.989 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.989 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:29.989 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:29.989 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.989 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.989 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.989 19:18:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:29.989 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.989 19:18:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:29.989 Found net devices under 0000:09:00.0: cvl_0_0 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:29.989 Found net devices under 0000:09:00.1: cvl_0_1 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.989 19:18:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.989 19:18:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:27:29.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:29.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms
00:27:29.989
00:27:29.989 --- 10.0.0.2 ping statistics ---
00:27:29.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:29.989 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:29.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:29.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms
00:27:29.989
00:27:29.989 --- 10.0.0.1 ping statistics ---
00:27:29.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:29.989 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable
00:27:29.989 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:29.989 ************************************
00:27:29.989 START TEST nvmf_target_disconnect_tc1
00:27:29.989 ************************************
00:27:29.990 19:18:22
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:29.990 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.990 [2024-07-25 19:18:22.296420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 19:18:22.296489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb771a0 with addr=10.0.0.2, port=4420 00:27:29.990 [2024-07-25 19:18:22.296529] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:29.990 [2024-07-25 19:18:22.296555] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:29.990 [2024-07-25 19:18:22.296570] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:29.990 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:29.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:29.990 Initializing NVMe Controllers 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:29.990 19:18:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:29.990 00:27:29.990 real 0m0.109s 00:27:29.990 user 0m0.042s 00:27:29.990 sys 0m0.067s 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.990 ************************************ 00:27:29.990 END TEST nvmf_target_disconnect_tc1 00:27:29.990 ************************************ 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:29.990 ************************************ 00:27:29.990 START TEST nvmf_target_disconnect_tc2 00:27:29.990 ************************************ 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1018890 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1018890 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1018890 ']' 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:29.990 19:18:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:29.990 [2024-07-25 19:18:22.414954] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:29.990 [2024-07-25 19:18:22.415054] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.990 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.248 [2024-07-25 19:18:22.495720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:30.248 [2024-07-25 19:18:22.621216] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.248 [2024-07-25 19:18:22.621279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.248 [2024-07-25 19:18:22.621297] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.248 [2024-07-25 19:18:22.621310] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.248 [2024-07-25 19:18:22.621322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:30.248 [2024-07-25 19:18:22.621413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:30.248 [2024-07-25 19:18:22.621465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:30.248 [2024-07-25 19:18:22.621544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:30.248 [2024-07-25 19:18:22.621552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.178 Malloc0 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.178 19:18:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.178 [2024-07-25 19:18:23.416321] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:31.178 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.179 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.179 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.179 19:18:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.179 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.179 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.179 [2024-07-25 19:18:23.444583] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.179 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.179 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:31.179 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.179 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.179 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.179 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1019050 00:27:31.179 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:31.179 19:18:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:31.179 EAL: No free 2048 kB 
hugepages reported on node 1 00:27:33.136 19:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1018890 00:27:33.136 19:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, 
sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 [2024-07-25 19:18:25.469837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 
starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Write completed with error (sct=0, sc=8) 00:27:33.136 starting I/O 
failed 00:27:33.136 [2024-07-25 19:18:25.470191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.136 starting I/O failed 00:27:33.136 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 
00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 [2024-07-25 19:18:25.470527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read 
completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Read completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 Write completed with error (sct=0, sc=8) 00:27:33.137 starting I/O failed 00:27:33.137 [2024-07-25 19:18:25.470859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ 
transport error -6 (No such device or address) on qpair id 1 00:27:33.137 [2024-07-25 19:18:25.471123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.471165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-07-25 19:18:25.471323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.471351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-07-25 19:18:25.471573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.471600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-07-25 19:18:25.471813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.471856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-07-25 19:18:25.472157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.472184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 
00:27:33.137 [2024-07-25 19:18:25.472332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.472359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-07-25 19:18:25.472520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.472567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-07-25 19:18:25.472758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.472803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-07-25 19:18:25.473034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.473063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-07-25 19:18:25.473247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.473276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 
00:27:33.137 [2024-07-25 19:18:25.473452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.473484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-07-25 19:18:25.473703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.473738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-07-25 19:18:25.474017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.474071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-07-25 19:18:25.474263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.474290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-07-25 19:18:25.474474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.474501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 
00:27:33.137 [2024-07-25 19:18:25.474668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-07-25 19:18:25.474693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.137 qpair failed and we were unable to recover it. 00:27:33.137 [2024-07-25 19:18:25.475017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.475060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.475244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.475271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.475446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.475473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.475656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.475686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 
00:27:33.138 [2024-07-25 19:18:25.475865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.475895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.476113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.476159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.476317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.476345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.476530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.476556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.476699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.476742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 
00:27:33.138 [2024-07-25 19:18:25.477037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.477082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.477293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.477321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.477510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.477554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.477928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.477981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.478268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.478295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 
00:27:33.138 [2024-07-25 19:18:25.478467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.478492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.478693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.478718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.479013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.479070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.479259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.479296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.479444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.479470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 
00:27:33.138 [2024-07-25 19:18:25.479617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.479656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.479866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.479892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.480046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.480072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.480277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.480323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.480505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.480533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 
00:27:33.138 [2024-07-25 19:18:25.480758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.480785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.480936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.480963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.481176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.481217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.481402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.481430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.481600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.481626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 
00:27:33.138 [2024-07-25 19:18:25.482335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.482363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.482568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.482594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.482774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.482817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.483117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.483144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.483292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.483320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 
00:27:33.138 [2024-07-25 19:18:25.483532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.483558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.483724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.483749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.483983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.484008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.484200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-07-25 19:18:25.484228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.138 qpair failed and we were unable to recover it. 00:27:33.138 [2024-07-25 19:18:25.484406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.484432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 
00:27:33.139 [2024-07-25 19:18:25.484605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.484635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.484914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.484976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.485182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.485209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.485397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.485423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.485662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.485691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 
00:27:33.139 [2024-07-25 19:18:25.485946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.485974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.486174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.486201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.486395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.486421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.486597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.486624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.486802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.486828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 
00:27:33.139 [2024-07-25 19:18:25.487059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.487085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.487245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.487272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.487458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.487484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.487826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.487874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.488107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.488134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 
00:27:33.139 [2024-07-25 19:18:25.488278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.488305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.488507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.488533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.488725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.488751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.488961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.488987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.489190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.489217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 
00:27:33.139 [2024-07-25 19:18:25.489370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.489397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.489600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.489630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.489919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.489945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.490130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.490161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.490354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.490381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 
00:27:33.139 [2024-07-25 19:18:25.490573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.490604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.490815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.490841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.491032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.491062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.491288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.491314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.491502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.491528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 
00:27:33.139 [2024-07-25 19:18:25.491710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.491735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.491934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.491964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.492175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.492203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.492414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.492440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.492633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.492658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 
00:27:33.139 [2024-07-25 19:18:25.492831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.492860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.139 qpair failed and we were unable to recover it. 00:27:33.139 [2024-07-25 19:18:25.493083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.139 [2024-07-25 19:18:25.493116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.493271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.493311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.493640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.493697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.493907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.493933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 
00:27:33.140 [2024-07-25 19:18:25.494075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.494106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.494255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.494283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.494447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.494472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.494654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.494680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.494855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.494881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 
00:27:33.140 [2024-07-25 19:18:25.495160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.495186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.495401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.495444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.495633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.495659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.495865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.495891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.496075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.496106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 
00:27:33.140 [2024-07-25 19:18:25.496322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.496362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.496571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.496616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.496901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.496957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.497188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.497216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.497389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.497416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 
00:27:33.140 [2024-07-25 19:18:25.497579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.497605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.497775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.497802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.497966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.497993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.498142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.498169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.498321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.498347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 
00:27:33.140 [2024-07-25 19:18:25.498588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.498615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.498790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.498816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.498962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.498988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.499168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.499202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.499381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.499408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 
00:27:33.140 [2024-07-25 19:18:25.499598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.499624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.499820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.499847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.500019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.500045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.500243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.500288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.500457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.500483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 
00:27:33.140 [2024-07-25 19:18:25.500678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.500704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.500905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.500932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.501094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.501126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.140 qpair failed and we were unable to recover it. 00:27:33.140 [2024-07-25 19:18:25.501303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.140 [2024-07-25 19:18:25.501329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.141 qpair failed and we were unable to recover it. 00:27:33.141 [2024-07-25 19:18:25.501527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.141 [2024-07-25 19:18:25.501553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.141 qpair failed and we were unable to recover it. 
00:27:33.141 [2024-07-25 19:18:25.501733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.141 [2024-07-25 19:18:25.501760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.141 qpair failed and we were unable to recover it. 00:27:33.141 [2024-07-25 19:18:25.501915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.141 [2024-07-25 19:18:25.501942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.141 qpair failed and we were unable to recover it. 00:27:33.141 [2024-07-25 19:18:25.502121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.141 [2024-07-25 19:18:25.502148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.141 qpair failed and we were unable to recover it. 00:27:33.141 [2024-07-25 19:18:25.502323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.141 [2024-07-25 19:18:25.502351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.141 qpair failed and we were unable to recover it. 00:27:33.141 [2024-07-25 19:18:25.502529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.141 [2024-07-25 19:18:25.502556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.141 qpair failed and we were unable to recover it. 
00:27:33.141 [2024-07-25 19:18:25.502759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.502785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.502938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.502965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.503193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.503237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.503459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.503486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.503664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.503691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.503868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.503895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.504092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.504124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.504321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.504365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.504558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.504603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.504873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.504900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.505084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.505116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.505318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.505345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.505556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.505582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.505755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.505783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.505972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.505998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.506177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.506204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.506349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.506376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.506541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.506568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.506718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.506746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.506946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.506972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.141 [2024-07-25 19:18:25.507146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.141 [2024-07-25 19:18:25.507174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.141 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.507345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.507389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.507590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.507634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.507783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.507814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.507985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.508013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.508234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.508279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.508468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.508497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.508742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.508769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.508964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.508991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.509184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.509211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.509396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.509424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.509621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.509667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.509839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.509868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.510052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.510081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.510286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.510315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.510496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.510527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.510726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.510752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.510938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.510967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.511143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.511176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.511389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.511433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.511641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.511668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.511836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.511864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.512013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.512043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.512198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.512226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.512373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.512402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.512568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.512601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.512771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.512798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.513005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.513033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.513239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.513267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.513412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.513439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.513641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.513669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.513848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.513876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.514033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.514067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.514266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.514296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.514447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.514473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.514655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.514683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.514871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.514899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.142 [2024-07-25 19:18:25.515076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.142 [2024-07-25 19:18:25.515111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.142 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.515274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.515302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.515473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.515503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.515712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.515761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.515925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.515953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.516152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.516180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.516359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.516392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.516534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.516567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.516745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.516772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.516950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.516978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.517183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.517211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.517414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.517441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.517619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.517651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.517826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.517854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.518020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.518047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.518249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.518298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.518479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.518506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.518689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.518717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.518868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.518900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.519062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.519090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.519305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.519333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.519550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.519596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.519823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.519853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.520075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.520111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.520289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.520334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.520565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.520593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.520782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.520828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.521006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.521037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.521302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.521331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.521527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.521555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.521729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.521758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.521946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.521974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.522148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.522177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.522359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.522398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.522627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.522655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.522827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.522854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.523029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.143 [2024-07-25 19:18:25.523055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.143 qpair failed and we were unable to recover it.
00:27:33.143 [2024-07-25 19:18:25.523201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.523228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.523420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.523449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.523726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.523779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.523968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.523998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.524207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.524235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.524391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.524419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.524583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.524614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.524829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.524858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.525047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.525074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.525230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.525262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.525433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.525459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.525609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.525635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.525871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.525898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.526094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.526125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.526295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.526321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.526495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.526520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.526694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.526720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.526892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.526918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.527144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.527171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.527321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.527349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.527551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.527578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.527776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.527805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.528002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.528029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.528224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.528250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.528422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.528449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.528601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.528628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.528795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.528821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.529000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.529026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.529170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.529197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.529345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.529388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.529574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.529603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.529823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.529850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.530023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.530050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.530247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.530273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.530419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.530447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.530726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.530780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.530968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.531002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.144 [2024-07-25 19:18:25.531213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.144 [2024-07-25 19:18:25.531239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.144 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.531413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.531441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.531662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.531691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.531870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.531899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.532109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.532139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.532303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.532329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.532501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.532527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.532679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.532706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.532859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.532904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.533092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.533147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.533321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.533349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.533496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.533523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.533717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.533743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.533919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.533945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.534085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.534117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.534290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.534316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.534506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.534532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.534747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.534776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.534961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.534990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.535174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.535201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.535357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.535383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.535572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.535600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.535819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.535848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.536035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.536064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.536266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.536292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.536440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.536467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.536643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.536673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.536864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.536890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.537044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.537070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.537243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.537270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.537458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.537488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.537769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.145 [2024-07-25 19:18:25.537831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.145 qpair failed and we were unable to recover it.
00:27:33.145 [2024-07-25 19:18:25.538063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.538089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.538294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.538321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.538473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.538499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.538662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.538693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.538913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.538940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.539111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.539137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.539283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.539310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.539533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.539567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.539761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.539787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.539959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.539985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.540173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.540200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.540351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.540379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.540568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.540597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.540784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.540814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.541013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.541039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.541193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.541220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.541437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.541466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.541659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.541686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.541902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.541932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.542124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.542151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.542297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.542325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.542561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.542591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.542805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.542833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.543019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.543045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.543201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.543228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.543425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.543451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.543623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.146 [2024-07-25 19:18:25.543649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:33.146 qpair failed and we were unable to recover it.
00:27:33.146 [2024-07-25 19:18:25.543861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-07-25 19:18:25.543889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-07-25 19:18:25.544068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-07-25 19:18:25.544095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-07-25 19:18:25.544270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-07-25 19:18:25.544298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-07-25 19:18:25.544506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-07-25 19:18:25.544535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-07-25 19:18:25.544688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-07-25 19:18:25.544717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 
00:27:33.146 [2024-07-25 19:18:25.544900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-07-25 19:18:25.544927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-07-25 19:18:25.545080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-07-25 19:18:25.545120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-07-25 19:18:25.545299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-07-25 19:18:25.545326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-07-25 19:18:25.545501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-07-25 19:18:25.545528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.146 [2024-07-25 19:18:25.545703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-07-25 19:18:25.545729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 
00:27:33.146 [2024-07-25 19:18:25.545895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.146 [2024-07-25 19:18:25.545921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.146 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.546124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.546151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.546308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.546335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.546512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.546538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.546718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.546745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 
00:27:33.147 [2024-07-25 19:18:25.546932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.546962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.547116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.547146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.547365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.547390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.547616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.547642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.547816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.547843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 
00:27:33.147 [2024-07-25 19:18:25.548042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.548073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.548309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.548338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.548551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.548580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.548804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.548830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.548996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.549022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 
00:27:33.147 [2024-07-25 19:18:25.549173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.549200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.549347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.549373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.549522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.549548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.549746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.549771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.549941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.549968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 
00:27:33.147 [2024-07-25 19:18:25.550148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.550179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.550398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.550424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.550615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.550641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.550861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.550890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.551082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.551116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 
00:27:33.147 [2024-07-25 19:18:25.551335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.551361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.551538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.551565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.551740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.551767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.551960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.551986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.552139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.552166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 
00:27:33.147 [2024-07-25 19:18:25.552314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.552341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.552513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.552541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.552749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.552778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.552964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.552994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.553215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.553242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 
00:27:33.147 [2024-07-25 19:18:25.553441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.553468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.553662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.147 [2024-07-25 19:18:25.553691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.147 qpair failed and we were unable to recover it. 00:27:33.147 [2024-07-25 19:18:25.553877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.553903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.554079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.554110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.554279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.554306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 
00:27:33.148 [2024-07-25 19:18:25.554504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.554530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.554687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.554716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.554907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.554938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.555130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.555156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.555314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.555345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 
00:27:33.148 [2024-07-25 19:18:25.555535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.555564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.555754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.555781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.555973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.556003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.556200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.556230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.556409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.556435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 
00:27:33.148 [2024-07-25 19:18:25.556606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.556640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.556812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.556843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.557064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.557090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.557314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.557343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.557540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.557566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 
00:27:33.148 [2024-07-25 19:18:25.557759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.557785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.557958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.557985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.558178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.558209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.558429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.558456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.558601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.558628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 
00:27:33.148 [2024-07-25 19:18:25.558822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.558848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.559003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.559029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.559227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.559253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.559419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.559445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.559616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.559643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 
00:27:33.148 [2024-07-25 19:18:25.559813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.559839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.560001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.560031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.560229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.560256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.560405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.560432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.560626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.560652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 
00:27:33.148 [2024-07-25 19:18:25.560840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.560868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.561060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.561090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.561306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.561333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.561499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.561525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 00:27:33.148 [2024-07-25 19:18:25.561714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.148 [2024-07-25 19:18:25.561743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.148 qpair failed and we were unable to recover it. 
00:27:33.149 [2024-07-25 19:18:25.561939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.149 [2024-07-25 19:18:25.561966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.149 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" message sequence repeats continuously from 19:18:25.562139 through 19:18:25.586060, differing only in timestamps ...]
00:27:33.152 [2024-07-25 19:18:25.586281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.152 [2024-07-25 19:18:25.586308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.152 qpair failed and we were unable to recover it. 00:27:33.152 [2024-07-25 19:18:25.586460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.152 [2024-07-25 19:18:25.586497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.152 qpair failed and we were unable to recover it. 00:27:33.152 [2024-07-25 19:18:25.586697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.152 [2024-07-25 19:18:25.586726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.152 qpair failed and we were unable to recover it. 00:27:33.152 [2024-07-25 19:18:25.586921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.152 [2024-07-25 19:18:25.586949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.152 qpair failed and we were unable to recover it. 00:27:33.152 [2024-07-25 19:18:25.587141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.152 [2024-07-25 19:18:25.587172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.152 qpair failed and we were unable to recover it. 
00:27:33.152 [2024-07-25 19:18:25.587381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.152 [2024-07-25 19:18:25.587409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.152 qpair failed and we were unable to recover it. 00:27:33.152 [2024-07-25 19:18:25.587558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.152 [2024-07-25 19:18:25.587584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.152 qpair failed and we were unable to recover it. 00:27:33.152 [2024-07-25 19:18:25.587778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.152 [2024-07-25 19:18:25.587821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.152 qpair failed and we were unable to recover it. 00:27:33.152 [2024-07-25 19:18:25.587992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.152 [2024-07-25 19:18:25.588027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.152 qpair failed and we were unable to recover it. 00:27:33.152 [2024-07-25 19:18:25.588228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.152 [2024-07-25 19:18:25.588271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.152 qpair failed and we were unable to recover it. 
00:27:33.431 [2024-07-25 19:18:25.588438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.431 [2024-07-25 19:18:25.588465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.431 qpair failed and we were unable to recover it. 00:27:33.431 [2024-07-25 19:18:25.588632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.431 [2024-07-25 19:18:25.588671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.431 qpair failed and we were unable to recover it. 00:27:33.431 [2024-07-25 19:18:25.588888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.431 [2024-07-25 19:18:25.588914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.431 qpair failed and we were unable to recover it. 00:27:33.431 [2024-07-25 19:18:25.589093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.431 [2024-07-25 19:18:25.589129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.431 qpair failed and we were unable to recover it. 00:27:33.431 [2024-07-25 19:18:25.589326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.431 [2024-07-25 19:18:25.589356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.431 qpair failed and we were unable to recover it. 
00:27:33.431 [2024-07-25 19:18:25.589568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.431 [2024-07-25 19:18:25.589597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.431 qpair failed and we were unable to recover it. 00:27:33.431 [2024-07-25 19:18:25.589804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.431 [2024-07-25 19:18:25.589834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.431 qpair failed and we were unable to recover it. 00:27:33.431 [2024-07-25 19:18:25.590045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.431 [2024-07-25 19:18:25.590077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.431 qpair failed and we were unable to recover it. 00:27:33.431 [2024-07-25 19:18:25.590311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.431 [2024-07-25 19:18:25.590338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.431 qpair failed and we were unable to recover it. 00:27:33.431 [2024-07-25 19:18:25.590542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.431 [2024-07-25 19:18:25.590571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.431 qpair failed and we were unable to recover it. 
00:27:33.431 [2024-07-25 19:18:25.590717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.590748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.590922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.590952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.591169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.591197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.591376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.591403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.591552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.591580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 
00:27:33.432 [2024-07-25 19:18:25.591777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.591807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.592005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.592034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.592233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.592260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.592402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.592428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.592613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.592642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 
00:27:33.432 [2024-07-25 19:18:25.592822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.592848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.593013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.593039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.593237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.593264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.593445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.593471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.593716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.593743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 
00:27:33.432 [2024-07-25 19:18:25.593915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.593941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.594088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.594122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.594270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.594297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.594474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.594500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.594655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.594681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 
00:27:33.432 [2024-07-25 19:18:25.594839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.594882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.595098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.595133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.595330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.595357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.595574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.595603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.595798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.595828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 
00:27:33.432 [2024-07-25 19:18:25.596028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.596054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.596219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.596246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.596402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.596442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.596594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.596622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.596821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.596848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 
00:27:33.432 [2024-07-25 19:18:25.597024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.597051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.597200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.597227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.597425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.597456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.432 [2024-07-25 19:18:25.597644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.432 [2024-07-25 19:18:25.597670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.432 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.597877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.597903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 
00:27:33.433 [2024-07-25 19:18:25.598111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.598139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.598290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.598317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.598484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.598510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.598662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.598688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.598886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.598912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 
00:27:33.433 [2024-07-25 19:18:25.599087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.599121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.599285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.599311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.599485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.599511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.599682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.599708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.599903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.599933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 
00:27:33.433 [2024-07-25 19:18:25.600138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.600165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.600358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.600385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.600535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.600561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.600743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.600769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.600942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.600968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 
00:27:33.433 [2024-07-25 19:18:25.601177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.601207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.601397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.601425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.601651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.601677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.601816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.601843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.602069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.602099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 
00:27:33.433 [2024-07-25 19:18:25.602327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.602354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.602545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.602572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.602742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.602769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.603002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.603030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.603203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.603229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 
00:27:33.433 [2024-07-25 19:18:25.603403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.603429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.603621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.603648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.603843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.603872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.604041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.604070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.604254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.604281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 
00:27:33.433 [2024-07-25 19:18:25.604477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.604503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.604747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.604775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.604974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.605000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.605197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.605224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.433 [2024-07-25 19:18:25.605366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.605392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 
00:27:33.433 [2024-07-25 19:18:25.605563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.433 [2024-07-25 19:18:25.605588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.433 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.605782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.605810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.605997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.606026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.606246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.606272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.606435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.606464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 
00:27:33.434 [2024-07-25 19:18:25.606734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.606787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.606982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.607008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.607200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.607242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.607393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.607418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.607594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.607620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 
00:27:33.434 [2024-07-25 19:18:25.607806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.607835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.608038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.608068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.608248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.608274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.608438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.608467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.608779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.608838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 
00:27:33.434 [2024-07-25 19:18:25.609054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.609080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.609237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.609263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.609494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.609520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.609719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.609745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.609918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.609944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 
00:27:33.434 [2024-07-25 19:18:25.610137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.610182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.610358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.610384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.610598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.610626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.610872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.610927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.611088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.611119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 
00:27:33.434 [2024-07-25 19:18:25.611302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.611328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.611541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.611567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.611752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.611778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.611952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.611978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.612155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.612182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 
00:27:33.434 [2024-07-25 19:18:25.612351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.612378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.434 qpair failed and we were unable to recover it. 00:27:33.434 [2024-07-25 19:18:25.612569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.434 [2024-07-25 19:18:25.612598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.612911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.612968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.613150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.613177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.613348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.613374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 
00:27:33.435 [2024-07-25 19:18:25.613584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.613612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.613827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.613854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.614043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.614071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.614274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.614305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.614454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.614480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 
00:27:33.435 [2024-07-25 19:18:25.614677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.614702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.614925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.614953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.615174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.615201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.615397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.615423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.615637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.615663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 
00:27:33.435 [2024-07-25 19:18:25.615841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.615867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.616041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.616069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.616285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.616312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.616488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.616514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.616708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.616737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 
00:27:33.435 [2024-07-25 19:18:25.616958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.616987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.617203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.617230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.617427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.617457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.617644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.617673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.617871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.617897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 
00:27:33.435 [2024-07-25 19:18:25.618113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.618143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.618367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.618396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.618592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.618619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.618811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.618843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.619042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.619068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 
00:27:33.435 [2024-07-25 19:18:25.619223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.619250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.619469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.619498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.619717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.619746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.619929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.435 [2024-07-25 19:18:25.619956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.435 qpair failed and we were unable to recover it. 00:27:33.435 [2024-07-25 19:18:25.620172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.620202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 
00:27:33.436 [2024-07-25 19:18:25.620393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.620422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.620648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.620675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.620851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.620877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.621049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.621076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.621265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.621292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 
00:27:33.436 [2024-07-25 19:18:25.621513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.621542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.621702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.621732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.621894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.621920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.622096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.622128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.622321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.622350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 
00:27:33.436 [2024-07-25 19:18:25.622548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.622574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.622769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.622798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.623015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.623044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.623219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.623246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.623463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.623492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 
00:27:33.436 [2024-07-25 19:18:25.623686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.623715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.623906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.623932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.624112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.624138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.624310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.624336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.624506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.624533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 
00:27:33.436 [2024-07-25 19:18:25.624718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.624756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.625003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.625057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.625308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.625335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.625573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.625601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 00:27:33.436 [2024-07-25 19:18:25.625803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.436 [2024-07-25 19:18:25.625857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.436 qpair failed and we were unable to recover it. 
00:27:33.436 [2024-07-25 19:18:25.626113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-25 19:18:25.626153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.436 qpair failed and we were unable to recover it.
00:27:33.436 [2024-07-25 19:18:25.626388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.626416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.626643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.626672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.626908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.626935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.627126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.627153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.627343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.627371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.627540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.627566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.627784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.627813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.627976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.628005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.628179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.628206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.628375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.628420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.628621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.628650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.628837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.628864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.629034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.629063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.629276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.629302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.629459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.629486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.629636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.629666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.629837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.629863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.630062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.630088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.630300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.630326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.630476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.630518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.630731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.630757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.630905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.630931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.631077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.631109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.631253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.631279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.631470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.631499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.631688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.631717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.631912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.631938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.632164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.632206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.632401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.632429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.632627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.632653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.632807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.632835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.633000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.633029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.633250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.633277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.633446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.633475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.633682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.633708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.437 qpair failed and we were unable to recover it.
00:27:33.437 [2024-07-25 19:18:25.633881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-25 19:18:25.633908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.634095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.634129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.634296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.634325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.634513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.634540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.634723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.634751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.634940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.634969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.635162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.635190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.635359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.635395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.635589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.635615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.635787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.635813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.635960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.635987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.636189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.636216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.636383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.636410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.636604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.636635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.636826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.636852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.637049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.637075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.637277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.637304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.637455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.637481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.637653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.637679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.637863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.637892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.638086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.638122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.638348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.638375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.638564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.638593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.638814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.638841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.639040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.639066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.639291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.639321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.639520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.639549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.639742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.639769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.639992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.640021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.640248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.640275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.640473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.640499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.640658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.640687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.640876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.640905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.641067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.438 [2024-07-25 19:18:25.641093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.438 qpair failed and we were unable to recover it.
00:27:33.438 [2024-07-25 19:18:25.641266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.641299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.641484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.641513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.641702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.641728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.641946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.641974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.642169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.642198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.642388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.642414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.642612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.642641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.642803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.642832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.643021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.643048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.643219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.643248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.643440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.643469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.643638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.643665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.643841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.643868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.644065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.644091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.644301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.644340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.644503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.644531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.644720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.644748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.644918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.644961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.645198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.645226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.645410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.645453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.645651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.645681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.645869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.645914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.646065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.646092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.646273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.646300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.646498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.439 [2024-07-25 19:18:25.646544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.439 qpair failed and we were unable to recover it.
00:27:33.439 [2024-07-25 19:18:25.646759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.439 [2024-07-25 19:18:25.646802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.439 qpair failed and we were unable to recover it. 00:27:33.439 [2024-07-25 19:18:25.647011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.439 [2024-07-25 19:18:25.647038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.439 qpair failed and we were unable to recover it. 00:27:33.439 [2024-07-25 19:18:25.647244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.439 [2024-07-25 19:18:25.647294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.439 qpair failed and we were unable to recover it. 00:27:33.439 [2024-07-25 19:18:25.647493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.439 [2024-07-25 19:18:25.647535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.439 qpair failed and we were unable to recover it. 00:27:33.439 [2024-07-25 19:18:25.647799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.439 [2024-07-25 19:18:25.647843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.439 qpair failed and we were unable to recover it. 
00:27:33.439 [2024-07-25 19:18:25.648044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.439 [2024-07-25 19:18:25.648070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.439 qpair failed and we were unable to recover it. 00:27:33.439 [2024-07-25 19:18:25.648241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.439 [2024-07-25 19:18:25.648269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.439 qpair failed and we were unable to recover it. 00:27:33.439 [2024-07-25 19:18:25.648468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.439 [2024-07-25 19:18:25.648511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.439 qpair failed and we were unable to recover it. 00:27:33.439 [2024-07-25 19:18:25.648736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.439 [2024-07-25 19:18:25.648779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.439 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.648973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.649000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 
00:27:33.440 [2024-07-25 19:18:25.649154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.649182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.649411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.649454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.649664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.649692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.649952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.649998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.650192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.650237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 
00:27:33.440 [2024-07-25 19:18:25.650462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.650505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.650698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.650742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.650885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.650913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.651112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.651139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.651360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.651403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 
00:27:33.440 [2024-07-25 19:18:25.651572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.651615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.651834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.651878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.652070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.652096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.652301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.652345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.652573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.652615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 
00:27:33.440 [2024-07-25 19:18:25.652823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.652851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.653045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.653072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.653251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.653295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.653488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.653518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.653762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.653807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 
00:27:33.440 [2024-07-25 19:18:25.654004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.654031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.654203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.654249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.654454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.654498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.654669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.654712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.654881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.654907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 
00:27:33.440 [2024-07-25 19:18:25.655116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.655143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.655326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.655354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.655529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.655573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.655777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.655821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.656024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.656051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 
00:27:33.440 [2024-07-25 19:18:25.656249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.656297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.656504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.656531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.656722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.656770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.656947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.440 [2024-07-25 19:18:25.656974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.440 qpair failed and we were unable to recover it. 00:27:33.440 [2024-07-25 19:18:25.657147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.657178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 
00:27:33.441 [2024-07-25 19:18:25.657364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.657408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.657631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.657675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.657887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.657930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.658132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.658161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.658332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.658381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 
00:27:33.441 [2024-07-25 19:18:25.658603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.658647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.658849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.658893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.659073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.659099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.659283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.659310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.659479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.659523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 
00:27:33.441 [2024-07-25 19:18:25.659692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.659737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.659945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.659972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.660143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.660173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.660383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.660412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.660621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.660664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 
00:27:33.441 [2024-07-25 19:18:25.660842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.660888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.661089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.661121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.661295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.661340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.661562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.661605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.661838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.661882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 
00:27:33.441 [2024-07-25 19:18:25.662056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.662082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.662282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.662327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.662524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.662570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.662763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.662806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.662949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.662975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 
00:27:33.441 [2024-07-25 19:18:25.663202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.663247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.663413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.663459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.663660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.663704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.663842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.663868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 00:27:33.441 [2024-07-25 19:18:25.664014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.441 [2024-07-25 19:18:25.664042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.441 qpair failed and we were unable to recover it. 
00:27:33.441 [2024-07-25 19:18:25.664246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.442 [2024-07-25 19:18:25.664291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.442 qpair failed and we were unable to recover it. 00:27:33.442 [2024-07-25 19:18:25.664516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.442 [2024-07-25 19:18:25.664560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.442 qpair failed and we were unable to recover it. 00:27:33.442 [2024-07-25 19:18:25.664762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.442 [2024-07-25 19:18:25.664805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.442 qpair failed and we were unable to recover it. 00:27:33.442 [2024-07-25 19:18:25.665061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.442 [2024-07-25 19:18:25.665088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.442 qpair failed and we were unable to recover it. 00:27:33.442 [2024-07-25 19:18:25.665318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.442 [2024-07-25 19:18:25.665362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.442 qpair failed and we were unable to recover it. 
00:27:33.442 [2024-07-25 19:18:25.665590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.442 [2024-07-25 19:18:25.665633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.442 qpair failed and we were unable to recover it. 00:27:33.442 [2024-07-25 19:18:25.665830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.442 [2024-07-25 19:18:25.665872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.442 qpair failed and we were unable to recover it. 00:27:33.442 [2024-07-25 19:18:25.666046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.442 [2024-07-25 19:18:25.666077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.442 qpair failed and we were unable to recover it. 00:27:33.442 [2024-07-25 19:18:25.666268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.442 [2024-07-25 19:18:25.666313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.442 qpair failed and we were unable to recover it. 00:27:33.442 [2024-07-25 19:18:25.666502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.442 [2024-07-25 19:18:25.666547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.442 qpair failed and we were unable to recover it. 
00:27:33.442 [2024-07-25 19:18:25.666745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.666788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.666986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.667012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.667207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.667252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.667436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.667481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.667685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.667730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.667925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.667970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.668187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.668230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.668448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.668490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.668699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.668744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.668921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.668948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.669091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.669124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.669326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.669369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.669557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.669601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.669801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.669846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.670020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.670047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.670251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.670280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.670469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.670513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.670742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.670787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.670958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.670985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.671174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.671221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.671487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.671531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.671756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.671799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.442 [2024-07-25 19:18:25.671974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.442 [2024-07-25 19:18:25.672001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.442 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.672197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.672246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.672467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.672511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.672701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.672732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.672933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.672960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.673157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.673185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.673360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.673403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.673589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.673618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.673828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.673857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.674051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.674078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.674283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.674310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.674532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.674561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.674759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.674788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.675003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.675032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.675248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.675275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.675473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.675502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.675704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.675733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.675981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.676034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.676230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.676257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.676479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.676508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.676749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.676778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.676959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.676988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.677211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.677238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.677435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.677462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.677691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.677733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.677965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.677994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.678207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.678234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.678461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.678490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.678688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.678718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.678911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.678946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.679171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.679198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.679374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.679418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.679648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.679674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.679868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.679898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.680086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.680172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.680350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.680377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.680581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.680610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.680849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.680878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.443 qpair failed and we were unable to recover it.
00:27:33.443 [2024-07-25 19:18:25.681094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.443 [2024-07-25 19:18:25.681146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.681321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.681347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.681539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.681569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.681905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.681963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.682175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.682202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.682379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.682406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.682605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.682631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.682834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.682863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.683082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.683125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.683294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.683321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.683498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.683524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.683676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.683703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.683901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.683928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.684151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.684180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.684366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.684395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.684558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.684584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.684733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.684774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.684963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.684993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.685185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.685216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.685417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.685446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.685609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.685638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.685851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.685877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.686080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.686116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.686332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.686361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.686577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.686603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.686755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.686781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.686967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.686993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.687190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.687218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.687446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.687475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.687693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.687722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.687919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.687946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.688091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.688123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.688318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.444 [2024-07-25 19:18:25.688348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.444 qpair failed and we were unable to recover it.
00:27:33.444 [2024-07-25 19:18:25.688543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.688569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.688745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.688771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.688946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.688972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.689150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.689177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.689353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.689378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.689573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.689602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.689794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.689820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.690037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.690066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.690238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.690264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.690436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.690462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.690634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.690660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.690830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.690857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.691048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.691074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.691307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.691336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.691518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.691546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.691734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.691761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.691972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.692001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.692196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.692226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.692419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.692445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.692660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.692689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.692882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.692911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.693106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.445 [2024-07-25 19:18:25.693133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.445 qpair failed and we were unable to recover it.
00:27:33.445 [2024-07-25 19:18:25.693326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.445 [2024-07-25 19:18:25.693354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.445 qpair failed and we were unable to recover it. 00:27:33.445 [2024-07-25 19:18:25.693552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.445 [2024-07-25 19:18:25.693581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.445 qpair failed and we were unable to recover it. 00:27:33.445 [2024-07-25 19:18:25.693774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.445 [2024-07-25 19:18:25.693800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.445 qpair failed and we were unable to recover it. 00:27:33.445 [2024-07-25 19:18:25.693987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.445 [2024-07-25 19:18:25.694016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.445 qpair failed and we were unable to recover it. 00:27:33.445 [2024-07-25 19:18:25.694197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.445 [2024-07-25 19:18:25.694227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.445 qpair failed and we were unable to recover it. 
00:27:33.445 [2024-07-25 19:18:25.694438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.445 [2024-07-25 19:18:25.694464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.445 qpair failed and we were unable to recover it. 00:27:33.445 [2024-07-25 19:18:25.694684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.445 [2024-07-25 19:18:25.694713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.445 qpair failed and we were unable to recover it. 00:27:33.445 [2024-07-25 19:18:25.694926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.445 [2024-07-25 19:18:25.694955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.445 qpair failed and we were unable to recover it. 00:27:33.445 [2024-07-25 19:18:25.695152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.445 [2024-07-25 19:18:25.695178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.445 qpair failed and we were unable to recover it. 00:27:33.445 [2024-07-25 19:18:25.695377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.445 [2024-07-25 19:18:25.695403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.445 qpair failed and we were unable to recover it. 
00:27:33.445 [2024-07-25 19:18:25.695635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.445 [2024-07-25 19:18:25.695664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.445 qpair failed and we were unable to recover it. 00:27:33.445 [2024-07-25 19:18:25.695876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.445 [2024-07-25 19:18:25.695902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.445 qpair failed and we were unable to recover it. 00:27:33.445 [2024-07-25 19:18:25.696114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.445 [2024-07-25 19:18:25.696144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.445 qpair failed and we were unable to recover it. 00:27:33.445 [2024-07-25 19:18:25.696331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.445 [2024-07-25 19:18:25.696361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.445 qpair failed and we were unable to recover it. 00:27:33.445 [2024-07-25 19:18:25.696577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.696603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 
00:27:33.446 [2024-07-25 19:18:25.696794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.696823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.697012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.697041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.697263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.697289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.697518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.697547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.697758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.697786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 
00:27:33.446 [2024-07-25 19:18:25.698001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.698028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.698227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.698256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.698471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.698499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.698700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.698726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.698939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.698968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 
00:27:33.446 [2024-07-25 19:18:25.699154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.699184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.699398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.699424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.699603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.699631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.699820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.699849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.700017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.700043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 
00:27:33.446 [2024-07-25 19:18:25.700211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.700238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.700414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.700446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.700667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.700694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.700886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.700913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.701108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.701134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 
00:27:33.446 [2024-07-25 19:18:25.701365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.701391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.701618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.701644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.701813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.701839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.702012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.702038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.702194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.702221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 
00:27:33.446 [2024-07-25 19:18:25.702445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.702474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.702666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.702692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.702864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.702890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.703114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.703143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 00:27:33.446 [2024-07-25 19:18:25.703317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.703343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.446 qpair failed and we were unable to recover it. 
00:27:33.446 [2024-07-25 19:18:25.703543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.446 [2024-07-25 19:18:25.703569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.703741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.703770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.703964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.703991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.704165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.704191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.704411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.704440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 
00:27:33.447 [2024-07-25 19:18:25.704641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.704667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.704840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.704866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.705111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.705138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.705312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.705338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.705532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.705562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 
00:27:33.447 [2024-07-25 19:18:25.705765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.705792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.705976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.706002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.706178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.706205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.706420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.706454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.706652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.706678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 
00:27:33.447 [2024-07-25 19:18:25.706870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.706899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.707116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.707143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.707307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.707333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.707483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.707508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.707678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.707704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 
00:27:33.447 [2024-07-25 19:18:25.707901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.707927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.708091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.708128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.708281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.708310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.708531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.708558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.708745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.708774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 
00:27:33.447 [2024-07-25 19:18:25.708972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.708998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.709146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.709173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.709374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.709404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.709591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.709620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.709773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.709799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 
00:27:33.447 [2024-07-25 19:18:25.709993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.710021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.710183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.447 [2024-07-25 19:18:25.710214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.447 qpair failed and we were unable to recover it. 00:27:33.447 [2024-07-25 19:18:25.710378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.448 [2024-07-25 19:18:25.710404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.448 qpair failed and we were unable to recover it. 00:27:33.448 [2024-07-25 19:18:25.710622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.448 [2024-07-25 19:18:25.710651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.448 qpair failed and we were unable to recover it. 00:27:33.448 [2024-07-25 19:18:25.710837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.448 [2024-07-25 19:18:25.710866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.448 qpair failed and we were unable to recover it. 
00:27:33.451 [2024-07-25 19:18:25.735575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.735605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.735774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.735805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.735990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.736019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.736195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.736221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.736401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.736438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 
00:27:33.451 [2024-07-25 19:18:25.736680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.736709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.736870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.736901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.737126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.737156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.737360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.737390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.737572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.737600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 
00:27:33.451 [2024-07-25 19:18:25.737795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.737831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.738022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.738049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.738256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.738285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.738456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.738487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.738679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.738711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 
00:27:33.451 [2024-07-25 19:18:25.738910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.738937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.739127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.739163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.739341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.739368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.739553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.739583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.739781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.739808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 
00:27:33.451 [2024-07-25 19:18:25.740002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.740032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.740232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.740262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.740475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.740505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.740697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.740724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.740897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.740924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 
00:27:33.451 [2024-07-25 19:18:25.741067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.741094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.451 qpair failed and we were unable to recover it. 00:27:33.451 [2024-07-25 19:18:25.741326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.451 [2024-07-25 19:18:25.741356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.741526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.741552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.741810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.741865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.742079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.742115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 
00:27:33.452 [2024-07-25 19:18:25.742339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.742368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.742574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.742602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.742855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.742885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.743042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.743072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.743312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.743339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 
00:27:33.452 [2024-07-25 19:18:25.743548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.743575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.743792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.743822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.744029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.744056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.744211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.744237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.744418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.744445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 
00:27:33.452 [2024-07-25 19:18:25.744703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.744755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.744984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.745011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.745164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.745191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.745366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.745392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.745632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.745663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 
00:27:33.452 [2024-07-25 19:18:25.745889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.745919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.746114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.746152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.746342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.746368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.746654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.746708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.746928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.746958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 
00:27:33.452 [2024-07-25 19:18:25.747175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.747202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.747398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.747424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.747591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.747621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.747815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.747845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.748060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.748089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 
00:27:33.452 [2024-07-25 19:18:25.748284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.748310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.748503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.748535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.748693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.452 [2024-07-25 19:18:25.748724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.452 qpair failed and we were unable to recover it. 00:27:33.452 [2024-07-25 19:18:25.748880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.748914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 00:27:33.453 [2024-07-25 19:18:25.749089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.749124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 
00:27:33.453 [2024-07-25 19:18:25.749345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.749374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 00:27:33.453 [2024-07-25 19:18:25.749606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.749636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 00:27:33.453 [2024-07-25 19:18:25.749848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.749874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 00:27:33.453 [2024-07-25 19:18:25.750025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.750052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 00:27:33.453 [2024-07-25 19:18:25.750225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.750252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 
00:27:33.453 [2024-07-25 19:18:25.750464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.750493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 00:27:33.453 [2024-07-25 19:18:25.750654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.750684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 00:27:33.453 [2024-07-25 19:18:25.750881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.750908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 00:27:33.453 [2024-07-25 19:18:25.751070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.751099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 00:27:33.453 [2024-07-25 19:18:25.751284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.751311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 
00:27:33.453 [2024-07-25 19:18:25.751522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.751552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 00:27:33.453 [2024-07-25 19:18:25.751747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.751774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 00:27:33.453 [2024-07-25 19:18:25.751997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.752026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 00:27:33.453 [2024-07-25 19:18:25.752243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.752272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 00:27:33.453 [2024-07-25 19:18:25.752487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.453 [2024-07-25 19:18:25.752517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.453 qpair failed and we were unable to recover it. 
00:27:33.453 [2024-07-25 19:18:25.752699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.752726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.752962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.752993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.753179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.753209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.753409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.753436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.753611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.753638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.753899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.753952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.754169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.754197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.754370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.754413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.754582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.754610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.754814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.754877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.755097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.755160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.755379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.755408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.755609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.755638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.755854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.755884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.756070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.756100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.756309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.756337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.756535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.756562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.756762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.453 [2024-07-25 19:18:25.756793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.453 qpair failed and we were unable to recover it.
00:27:33.453 [2024-07-25 19:18:25.756977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.757007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.757221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.757248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.757443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.757470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.757643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.757670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.757902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.757932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.758134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.758165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.758358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.758386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.758725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.758784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.758971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.759001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.759177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.759207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.759399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.759425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.759597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.759624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.759834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.759862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.760007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.760034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.760213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.760241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.760393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.760420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.760614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.760641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.760832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.760861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.761021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.761048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.761218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.761251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.761398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.761424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.761639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.761669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.761868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.761895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.762080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.762114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.762318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.762345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.762495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.762522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.762694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.762721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.762915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.762942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.763117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.763145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.763321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.763348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.763520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.763547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.763716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.763743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.763890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.454 [2024-07-25 19:18:25.763916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.454 qpair failed and we were unable to recover it.
00:27:33.454 [2024-07-25 19:18:25.764067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.764094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.764257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.764284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.764453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.764480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.764625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.764652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.764823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.764851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.765047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.765074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.765234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.765262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.765434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.765461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.765638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.765665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.765812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.765838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.765990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.766017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.766206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.766234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.766401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.766428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.766636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.766662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.766840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.766867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.767037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.767064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.767247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.767276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.767417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.767443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.767600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.767628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.767820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.767847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.768023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.768050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.768253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.768281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.768447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.768473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.768624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.768650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.768827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.768854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.769026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.769053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.769226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.769254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.769406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.769434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.769631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.769659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.769805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.769833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.770005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.770033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.770174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.770200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.770347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.770374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.770524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.770551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.770728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.770756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.770901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.770928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.771078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.455 [2024-07-25 19:18:25.771112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.455 qpair failed and we were unable to recover it.
00:27:33.455 [2024-07-25 19:18:25.771313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.771340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.771483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.771510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.771686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.771715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.771897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.771924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.772110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.772137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.772318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.772345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.772516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.772543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.772678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.772710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.772884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.772911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.773088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.773123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.773293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.773320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.773485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.773512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.773677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.773705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.773876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.773903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.774107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.774135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.774276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.774303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.774484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.774511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.774649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.774681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.774878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.774905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.775054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.775082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.775258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.775286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.775460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.775487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.775646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.775672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.775852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.775879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.776051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.776077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.776239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.776267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.776407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.456 [2024-07-25 19:18:25.776434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.456 qpair failed and we were unable to recover it.
00:27:33.456 [2024-07-25 19:18:25.776602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-07-25 19:18:25.776629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-07-25 19:18:25.776774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-07-25 19:18:25.776801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-07-25 19:18:25.776971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.456 [2024-07-25 19:18:25.777010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.456 qpair failed and we were unable to recover it. 00:27:33.456 [2024-07-25 19:18:25.777180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.777208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.777415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.777442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 
00:27:33.457 [2024-07-25 19:18:25.777610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.777637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.777811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.777838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.777987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.778014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.778180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.778207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.778384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.778410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 
00:27:33.457 [2024-07-25 19:18:25.778557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.778584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.778775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.778802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.778979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.779005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.779151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.779178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.779328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.779355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 
00:27:33.457 [2024-07-25 19:18:25.779576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.779605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.779801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.779832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.779997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.780039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.780204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.780230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.780406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.780433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 
00:27:33.457 [2024-07-25 19:18:25.780601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.780628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.780826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.780853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.781054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.781082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.781298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.781325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.781504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.781546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 
00:27:33.457 [2024-07-25 19:18:25.781737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.781763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.781946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.781973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.782123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.782150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.782305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.782333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.782531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.782558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 
00:27:33.457 [2024-07-25 19:18:25.782700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.782727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.457 qpair failed and we were unable to recover it. 00:27:33.457 [2024-07-25 19:18:25.782931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.457 [2024-07-25 19:18:25.782958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.783100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.783138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.783310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.783337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.783507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.783534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 
00:27:33.458 [2024-07-25 19:18:25.783700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.783727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.783899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.783926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.784096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.784129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.784283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.784310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.784515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.784545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 
00:27:33.458 [2024-07-25 19:18:25.784760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.784790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.785011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.785038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.785185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.785213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.785382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.785409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.785609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.785636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 
00:27:33.458 [2024-07-25 19:18:25.785815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.785842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.786017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.786044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.786218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.786246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.786437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.786466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.786692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.786719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 
00:27:33.458 [2024-07-25 19:18:25.786890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.786916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.787071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.787098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.787286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.787313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.787483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.787510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.787692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.787719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 
00:27:33.458 [2024-07-25 19:18:25.787861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.787887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.788064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.788091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.788322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.788350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.788575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.788605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.788829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.788858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 
00:27:33.458 [2024-07-25 19:18:25.789044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.789073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.789296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.789323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.789525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.789552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.789751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.789778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.789930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.789956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 
00:27:33.458 [2024-07-25 19:18:25.790153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.790181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.790354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.790382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.790536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.458 [2024-07-25 19:18:25.790564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.458 qpair failed and we were unable to recover it. 00:27:33.458 [2024-07-25 19:18:25.790735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-07-25 19:18:25.790762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-07-25 19:18:25.790958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-07-25 19:18:25.790984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 
00:27:33.459 [2024-07-25 19:18:25.791162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-07-25 19:18:25.791189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-07-25 19:18:25.791354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-07-25 19:18:25.791381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-07-25 19:18:25.791539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-07-25 19:18:25.791566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-07-25 19:18:25.791724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-07-25 19:18:25.791750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 00:27:33.459 [2024-07-25 19:18:25.791895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.459 [2024-07-25 19:18:25.791924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.459 qpair failed and we were unable to recover it. 
00:27:33.459 [2024-07-25 19:18:25.792076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.459 [2024-07-25 19:18:25.792109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.459 qpair failed and we were unable to recover it.
[... the same triplet -- posix.c:1023:posix_sock_create connect() failed (errno = 111, ECONNREFUSED), nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." -- repeats continuously from 2024-07-25 19:18:25.792 through 19:18:25.815 ...]
00:27:33.462 [2024-07-25 19:18:25.815520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-07-25 19:18:25.815547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-07-25 19:18:25.815731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-07-25 19:18:25.815758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-07-25 19:18:25.815926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-07-25 19:18:25.815955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-07-25 19:18:25.816128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-07-25 19:18:25.816157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-07-25 19:18:25.816338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-07-25 19:18:25.816364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 
00:27:33.462 [2024-07-25 19:18:25.816558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.462 [2024-07-25 19:18:25.816588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-07-25 19:18:25.816754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.816785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.816955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.816982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.817128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.817169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.817392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.817418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 
00:27:33.463 [2024-07-25 19:18:25.817584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.817611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.817782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.817809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.818004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.818033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.818197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.818228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.818387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.818418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 
00:27:33.463 [2024-07-25 19:18:25.818630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.818657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.818880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.818910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.819111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.819146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.819340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.819369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.819556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.819583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 
00:27:33.463 [2024-07-25 19:18:25.819734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.819760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.819899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.819941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.820150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.820180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.820370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.820397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.820603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.820651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 
00:27:33.463 [2024-07-25 19:18:25.820842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.820872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.821043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.821073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.821268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.821296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.821468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.821496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.821665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.821692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 
00:27:33.463 [2024-07-25 19:18:25.821867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.821894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.822081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.822114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.822295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.822324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.822538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.822567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.822769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.822796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 
00:27:33.463 [2024-07-25 19:18:25.822960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.822987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.823162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.823192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.823379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.823408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.823607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.823637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.823827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.823855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 
00:27:33.463 [2024-07-25 19:18:25.824022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.824049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.824220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.824248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.824436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.463 [2024-07-25 19:18:25.824465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.463 qpair failed and we were unable to recover it. 00:27:33.463 [2024-07-25 19:18:25.824683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.824710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.824899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.824950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 
00:27:33.464 [2024-07-25 19:18:25.825166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.825196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.825351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.825381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.825551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.825578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.825746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.825791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.825959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.825988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 
00:27:33.464 [2024-07-25 19:18:25.826202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.826230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.826374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.826400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.826567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.826597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.826763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.826793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.826973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.827000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 
00:27:33.464 [2024-07-25 19:18:25.827179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.827207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.827386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.827415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.827576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.827605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.827806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.827836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.828007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.828034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 
00:27:33.464 [2024-07-25 19:18:25.828197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.828224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.828367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.828416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.828592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.828622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.828845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.828871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.829027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.829056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 
00:27:33.464 [2024-07-25 19:18:25.829232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.829263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.829430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.829461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.829681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.829708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.829859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.829887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.830058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.830088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 
00:27:33.464 [2024-07-25 19:18:25.830288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.830319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.830498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.830524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.464 [2024-07-25 19:18:25.830754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.464 [2024-07-25 19:18:25.830783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.464 qpair failed and we were unable to recover it. 00:27:33.465 [2024-07-25 19:18:25.830972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.465 [2024-07-25 19:18:25.831002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.465 qpair failed and we were unable to recover it. 00:27:33.465 [2024-07-25 19:18:25.831198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.465 [2024-07-25 19:18:25.831227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.465 qpair failed and we were unable to recover it. 
00:27:33.465 [2024-07-25 19:18:25.831397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.465 [2024-07-25 19:18:25.831424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.465 qpair failed and we were unable to recover it.
00:27:33.468 [2024-07-25 19:18:25.853781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120b230 is same with the state(5) to be set 00:27:33.468 [2024-07-25 19:18:25.854113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.854176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.854407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.854438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.854605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.854633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.854822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.854868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.855030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.855060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 
00:27:33.468 [2024-07-25 19:18:25.855232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.855260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.855451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.855482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.855684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.855711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.855907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.855934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.856114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.856146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 
00:27:33.468 [2024-07-25 19:18:25.856325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.856352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.856521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.856548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.856744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.856773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.856978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.857008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.857183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.857211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 
00:27:33.468 [2024-07-25 19:18:25.857350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.857376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.857570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.857599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.857811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.857838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.858006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.858036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.858235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.858265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 
00:27:33.468 [2024-07-25 19:18:25.858454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.858481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.858652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.858679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.858874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.858904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.859064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.859091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.859296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.859326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 
00:27:33.468 [2024-07-25 19:18:25.859513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.859543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.859762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.859794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.859985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.860014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.860193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.860223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.860389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.860416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 
00:27:33.468 [2024-07-25 19:18:25.860630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.860659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.468 [2024-07-25 19:18:25.860851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.468 [2024-07-25 19:18:25.860881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.468 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.861074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.861109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.861310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.861339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.861494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.861525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 
00:27:33.469 [2024-07-25 19:18:25.861692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.861720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.861917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.861947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.862122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.862153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.862319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.862346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.862564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.862593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 
00:27:33.469 [2024-07-25 19:18:25.862814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.862844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.863016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.863043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.863206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.863236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.863430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.863460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.863653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.863684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 
00:27:33.469 [2024-07-25 19:18:25.863838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.863865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.864013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.864061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.864243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.864271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.864466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.864496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.864666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.864695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 
00:27:33.469 [2024-07-25 19:18:25.864892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.864919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.865069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.865094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.865288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.865317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.865532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.865558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.865789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.865835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 
00:27:33.469 [2024-07-25 19:18:25.866028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.866057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.866255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.866282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.866482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.866512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.866736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.866762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.866956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.866983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 
00:27:33.469 [2024-07-25 19:18:25.867162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.867192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.867357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.867386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.867580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.867606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.867761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.867787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.867970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.868000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 
00:27:33.469 [2024-07-25 19:18:25.868177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.868204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.868375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.868402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.868564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.868598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.868812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.868839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 00:27:33.469 [2024-07-25 19:18:25.869032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.469 [2024-07-25 19:18:25.869061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.469 qpair failed and we were unable to recover it. 
00:27:33.470 [2024-07-25 19:18:25.869263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.869294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.869464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.869491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.869671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.869698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.869888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.869918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.870120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.870157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 
00:27:33.470 [2024-07-25 19:18:25.870331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.870358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.870547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.870576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.870751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.870777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.870968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.870998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.871176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.871203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 
00:27:33.470 [2024-07-25 19:18:25.871372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.871399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.871636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.871663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.871832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.871858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.872029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.872056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.872227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.872255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 
00:27:33.470 [2024-07-25 19:18:25.872397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.872423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.872601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.872628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.872791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.872837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.873007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.873037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.873208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.873235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 
00:27:33.470 [2024-07-25 19:18:25.873387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.873414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.873659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.873685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.873838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.873864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.874056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.874085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.874256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.874289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 
00:27:33.470 [2024-07-25 19:18:25.874487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.874513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.874702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.874732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.874892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.874922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.875089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.875123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.875295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.875322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 
00:27:33.470 [2024-07-25 19:18:25.875495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.875525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.875720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.875747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.875960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.875990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.876155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.876186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.876372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.876398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 
00:27:33.470 [2024-07-25 19:18:25.876565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.876594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.876789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.470 [2024-07-25 19:18:25.876815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.470 qpair failed and we were unable to recover it. 00:27:33.470 [2024-07-25 19:18:25.876983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.877010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.877170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.877197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.877387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.877416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 
00:27:33.471 [2024-07-25 19:18:25.877600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.877627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.877812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.877840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.878064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.878117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.878345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.878385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.878600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.878629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 
00:27:33.471 [2024-07-25 19:18:25.878818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.878848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.879022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.879048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.879203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.879250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.879446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.879475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.879674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.879700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 
00:27:33.471 [2024-07-25 19:18:25.879875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.879904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.880063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.880118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.880290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.880317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.880514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.880544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.880738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.880764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 
00:27:33.471 [2024-07-25 19:18:25.880940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.880966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.881143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.881171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.881383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.881413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.881622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.881662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 00:27:33.471 [2024-07-25 19:18:25.881866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.471 [2024-07-25 19:18:25.881899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.471 qpair failed and we were unable to recover it. 
00:27:33.749 [2024-07-25 19:18:25.882081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.749 [2024-07-25 19:18:25.882122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.749 qpair failed and we were unable to recover it. 00:27:33.749 [2024-07-25 19:18:25.882293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.749 [2024-07-25 19:18:25.882320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.749 qpair failed and we were unable to recover it. 00:27:33.749 [2024-07-25 19:18:25.882528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.749 [2024-07-25 19:18:25.882557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.749 qpair failed and we were unable to recover it. 00:27:33.749 [2024-07-25 19:18:25.882720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.749 [2024-07-25 19:18:25.882749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.749 qpair failed and we were unable to recover it. 00:27:33.749 [2024-07-25 19:18:25.882909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.749 [2024-07-25 19:18:25.882936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.749 qpair failed and we were unable to recover it. 
00:27:33.749 [2024-07-25 19:18:25.883092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.749 [2024-07-25 19:18:25.883131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.749 qpair failed and we were unable to recover it. 00:27:33.749 [2024-07-25 19:18:25.883278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.749 [2024-07-25 19:18:25.883322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.749 qpair failed and we were unable to recover it. 00:27:33.749 [2024-07-25 19:18:25.883498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.749 [2024-07-25 19:18:25.883525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.749 qpair failed and we were unable to recover it. 00:27:33.749 [2024-07-25 19:18:25.883681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.749 [2024-07-25 19:18:25.883711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.749 qpair failed and we were unable to recover it. 00:27:33.749 [2024-07-25 19:18:25.883915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.749 [2024-07-25 19:18:25.883945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.749 qpair failed and we were unable to recover it. 
00:27:33.749 [2024-07-25 19:18:25.884126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.749 [2024-07-25 19:18:25.884153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.749 qpair failed and we were unable to recover it. 00:27:33.749 [2024-07-25 19:18:25.884334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.749 [2024-07-25 19:18:25.884360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.749 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.884555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.884585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.884764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.884791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.884998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.885027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 
00:27:33.750 [2024-07-25 19:18:25.885243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.885271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.885475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.885501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.885700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.885730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.885893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.885922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.886108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.886136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 
00:27:33.750 [2024-07-25 19:18:25.886337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.886367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.886541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.886570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.886737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.886764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.886913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.886956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.887166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.887196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 
00:27:33.750 [2024-07-25 19:18:25.887394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.887420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.887573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.887616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.887798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.887826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.888021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.888047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.888212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.888240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 
00:27:33.750 [2024-07-25 19:18:25.888459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.888489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.888694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.888720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.888897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.888927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.889144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.889175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.889345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.889378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 
00:27:33.750 [2024-07-25 19:18:25.889582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.889611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.889806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.889835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.890031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.890058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.890249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.890276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.890461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.890491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 
00:27:33.750 [2024-07-25 19:18:25.890689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.890717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.890895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.890922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.891114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.891144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.891331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.891358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.891575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.891604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 
00:27:33.750 [2024-07-25 19:18:25.891816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.891846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.892025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.892053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.892235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.892261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.892485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.750 [2024-07-25 19:18:25.892514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.750 qpair failed and we were unable to recover it. 00:27:33.750 [2024-07-25 19:18:25.892713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.892740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 
00:27:33.751 [2024-07-25 19:18:25.892955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.892984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.893205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.893236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.893439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.893466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.893656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.893685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.893898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.893928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 
00:27:33.751 [2024-07-25 19:18:25.894140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.894167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.894355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.894385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.894572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.894602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.894827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.894854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.895054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.895090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 
00:27:33.751 [2024-07-25 19:18:25.895288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.895317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.895486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.895513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.895726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.895756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.895979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.896006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.896186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.896212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 
00:27:33.751 [2024-07-25 19:18:25.896414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.896441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.896640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.896670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.896863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.896889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.897079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.897118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.897295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.897322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 
00:27:33.751 [2024-07-25 19:18:25.897519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.897546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.897738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.897768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.897980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.898009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.898224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.898252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.898445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.898476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 
00:27:33.751 [2024-07-25 19:18:25.898643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.898672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.898890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.898916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.899111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.899142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.899358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.899384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.899553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.899580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 
00:27:33.751 [2024-07-25 19:18:25.899731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.899758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.899936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.899963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.900131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.900158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.900320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.900347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.900572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.900601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 
00:27:33.751 [2024-07-25 19:18:25.900822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-07-25 19:18:25.900850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.751 qpair failed and we were unable to recover it. 00:27:33.751 [2024-07-25 19:18:25.901052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.901085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.901333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.901360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.901530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.901557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.901705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.901731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 
00:27:33.752 [2024-07-25 19:18:25.901901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.901927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.902100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.902137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.902363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.902396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.902567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.902600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.902792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.902818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 
00:27:33.752 [2024-07-25 19:18:25.903041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.903067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.903257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.903284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.903458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.903486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.903683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.903724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.903883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.903914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 
00:27:33.752 [2024-07-25 19:18:25.904154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.904181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.904379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.904422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.904602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.904631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.904802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.904829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.905012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.905039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 
00:27:33.752 [2024-07-25 19:18:25.905194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.905221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.905398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.905424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.905621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.905651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.905854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.905880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.906077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.906110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 
00:27:33.752 [2024-07-25 19:18:25.906315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.906345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.906535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.906564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.906722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.906748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.906966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.907001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.907194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.907225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 
00:27:33.752 [2024-07-25 19:18:25.907422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.907449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.907636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.907663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.907892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.907923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.908097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.908132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.908314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.908340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 
00:27:33.752 [2024-07-25 19:18:25.908522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.908549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.908747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.908774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.908995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.909024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.752 qpair failed and we were unable to recover it. 00:27:33.752 [2024-07-25 19:18:25.909227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-07-25 19:18:25.909254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.909456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.909483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 
00:27:33.753 [2024-07-25 19:18:25.909678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.909709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.909877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.909908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.910132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.910159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.910367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.910396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.910556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.910585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 
00:27:33.753 [2024-07-25 19:18:25.910816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.910843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.911019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.911051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.911266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.911292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.911490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.911516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.911707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.911737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 
00:27:33.753 [2024-07-25 19:18:25.911925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.911955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.912145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.912172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.912346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.912373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.912611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.912641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.912841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.912867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 
00:27:33.753 [2024-07-25 19:18:25.913059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.913089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.913302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.913329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.913503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.913531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.913720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.913750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.913939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.913969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 
00:27:33.753 [2024-07-25 19:18:25.914165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.914193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.914377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.914407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.914622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.914652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.914866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.914892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.915088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.915124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 
00:27:33.753 [2024-07-25 19:18:25.915312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.915341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.915508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.915535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.915735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.915764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.915959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.915986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.916166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.916197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 
00:27:33.753 [2024-07-25 19:18:25.916399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.916443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.916636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.916666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.916865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.916891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.917061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.917087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 00:27:33.753 [2024-07-25 19:18:25.917258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.753 [2024-07-25 19:18:25.917285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.753 qpair failed and we were unable to recover it. 
00:27:33.753 [2024-07-25 19:18:25.917481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.917508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.917689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.917715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.917868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.917912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.918113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.918141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.918334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.918363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 
00:27:33.754 [2024-07-25 19:18:25.918575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.918605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.918801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.918827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.919063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.919090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.919296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.919326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.919541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.919567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 
00:27:33.754 [2024-07-25 19:18:25.919719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.919746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.919900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.919944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.920160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.920187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.920343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.920370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.920609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.920636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 
00:27:33.754 [2024-07-25 19:18:25.920836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.920862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.921013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.921041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.921232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.921263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.921456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.921483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.921648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.921677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 
00:27:33.754 [2024-07-25 19:18:25.921893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.921923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.922122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.922154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.922306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.922333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.922506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.922532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.922724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.922751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 
00:27:33.754 [2024-07-25 19:18:25.922968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.922998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.923189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.923216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.754 qpair failed and we were unable to recover it. 00:27:33.754 [2024-07-25 19:18:25.923394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.754 [2024-07-25 19:18:25.923421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.923612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.923642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.923834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.923863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 
00:27:33.755 [2024-07-25 19:18:25.924040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.924066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.924243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.924270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.924484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.924514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.924704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.924730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.924907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.924934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 
00:27:33.755 [2024-07-25 19:18:25.925086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.925129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.925297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.925324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.925517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.925547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.925709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.925738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.925934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.925961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 
00:27:33.755 [2024-07-25 19:18:25.926128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.926156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.926319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.926348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.926542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.926569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.926764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.926793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.926977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.927006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 
00:27:33.755 [2024-07-25 19:18:25.927231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.927258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.927489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.927519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.927676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.927706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.927924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.927957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.928177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.928207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 
00:27:33.755 [2024-07-25 19:18:25.928391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.928420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.928614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.928641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.928794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.928821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.928995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.929022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.929214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.929241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 
00:27:33.755 [2024-07-25 19:18:25.929469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.929499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.929705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.929735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.929896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.929923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.930120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.930150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.930333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.930363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 
00:27:33.755 [2024-07-25 19:18:25.930577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.930603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.930789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.930818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.931018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.931049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.931242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.931269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 00:27:33.755 [2024-07-25 19:18:25.931456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.755 [2024-07-25 19:18:25.931486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.755 qpair failed and we were unable to recover it. 
00:27:33.755 [2024-07-25 19:18:25.931681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.756 [2024-07-25 19:18:25.931710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.756 qpair failed and we were unable to recover it. 00:27:33.756 [2024-07-25 19:18:25.931900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.756 [2024-07-25 19:18:25.931927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.756 qpair failed and we were unable to recover it. 00:27:33.756 [2024-07-25 19:18:25.932146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.756 [2024-07-25 19:18:25.932177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.756 qpair failed and we were unable to recover it. 00:27:33.756 [2024-07-25 19:18:25.932378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.756 [2024-07-25 19:18:25.932407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.756 qpair failed and we were unable to recover it. 00:27:33.756 [2024-07-25 19:18:25.932602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.756 [2024-07-25 19:18:25.932628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.756 qpair failed and we were unable to recover it. 
00:27:33.756 [2024-07-25 19:18:25.932824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.756 [2024-07-25 19:18:25.932853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.756 qpair failed and we were unable to recover it. 00:27:33.756 [2024-07-25 19:18:25.933019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.756 [2024-07-25 19:18:25.933049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.756 qpair failed and we were unable to recover it. 00:27:33.756 [2024-07-25 19:18:25.933246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.756 [2024-07-25 19:18:25.933273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.756 qpair failed and we were unable to recover it. 00:27:33.756 [2024-07-25 19:18:25.933448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.756 [2024-07-25 19:18:25.933475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.756 qpair failed and we were unable to recover it. 00:27:33.756 [2024-07-25 19:18:25.933645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.756 [2024-07-25 19:18:25.933672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.756 qpair failed and we were unable to recover it. 
00:27:33.756 [2024-07-25 19:18:25.933878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.756 [2024-07-25 19:18:25.933905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.756 qpair failed and we were unable to recover it. 00:27:33.756 [2024-07-25 19:18:25.934133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.756 [2024-07-25 19:18:25.934163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.756 qpair failed and we were unable to recover it. 00:27:33.756 [2024-07-25 19:18:25.934321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.756 [2024-07-25 19:18:25.934350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.756 qpair failed and we were unable to recover it. 00:27:33.756 [2024-07-25 19:18:25.934541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.756 [2024-07-25 19:18:25.934568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.756 qpair failed and we were unable to recover it. 00:27:33.756 [2024-07-25 19:18:25.934786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.756 [2024-07-25 19:18:25.934815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.756 qpair failed and we were unable to recover it. 
00:27:33.756 [2024-07-25 19:18:25.935018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.756 [2024-07-25 19:18:25.935047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.756 qpair failed and we were unable to recover it.
[... the same three-line error record (connect() failed, errno = 111; sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously, with only the timestamps advancing, through 19:18:25.959891 ...]
00:27:33.759 [2024-07-25 19:18:25.959862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.759 [2024-07-25 19:18:25.959891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.759 qpair failed and we were unable to recover it.
00:27:33.759 [2024-07-25 19:18:25.960096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.759 [2024-07-25 19:18:25.960132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.759 qpair failed and we were unable to recover it. 00:27:33.759 [2024-07-25 19:18:25.960324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.759 [2024-07-25 19:18:25.960351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.759 qpair failed and we were unable to recover it. 00:27:33.759 [2024-07-25 19:18:25.960555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.759 [2024-07-25 19:18:25.960584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.759 qpair failed and we were unable to recover it. 00:27:33.759 [2024-07-25 19:18:25.960799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.759 [2024-07-25 19:18:25.960825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.759 qpair failed and we were unable to recover it. 00:27:33.759 [2024-07-25 19:18:25.960998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.759 [2024-07-25 19:18:25.961024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.759 qpair failed and we were unable to recover it. 
00:27:33.759 [2024-07-25 19:18:25.961176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.759 [2024-07-25 19:18:25.961203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.759 qpair failed and we were unable to recover it. 00:27:33.759 [2024-07-25 19:18:25.961403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.759 [2024-07-25 19:18:25.961430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.759 qpair failed and we were unable to recover it. 00:27:33.759 [2024-07-25 19:18:25.961656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.759 [2024-07-25 19:18:25.961685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.759 qpair failed and we were unable to recover it. 00:27:33.759 [2024-07-25 19:18:25.961877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.759 [2024-07-25 19:18:25.961906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.759 qpair failed and we were unable to recover it. 00:27:33.759 [2024-07-25 19:18:25.962124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.759 [2024-07-25 19:18:25.962151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.759 qpair failed and we were unable to recover it. 
00:27:33.759 [2024-07-25 19:18:25.962350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.759 [2024-07-25 19:18:25.962376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.759 qpair failed and we were unable to recover it. 00:27:33.759 [2024-07-25 19:18:25.962567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.759 [2024-07-25 19:18:25.962597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.759 qpair failed and we were unable to recover it. 00:27:33.759 [2024-07-25 19:18:25.962775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.759 [2024-07-25 19:18:25.962802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.759 qpair failed and we were unable to recover it. 00:27:33.759 [2024-07-25 19:18:25.962994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.759 [2024-07-25 19:18:25.963023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.759 qpair failed and we were unable to recover it. 00:27:33.759 [2024-07-25 19:18:25.963204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.759 [2024-07-25 19:18:25.963231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.759 qpair failed and we were unable to recover it. 
00:27:33.760 [2024-07-25 19:18:25.963430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.963456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.963659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.963693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.963865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.963895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.964082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.964115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.964298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.964325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 
00:27:33.760 [2024-07-25 19:18:25.964488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.964517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.964707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.964733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.964912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.964939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.965133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.965164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.965361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.965388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 
00:27:33.760 [2024-07-25 19:18:25.965579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.965610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.965823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.965853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.966074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.966100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.966330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.966359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.966555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.966585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 
00:27:33.760 [2024-07-25 19:18:25.966785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.966811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.967001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.967030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.967247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.967278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.967442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.967468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.967689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.967718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 
00:27:33.760 [2024-07-25 19:18:25.967896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.967925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.968187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.968215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.968439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.968468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.968655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.968686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.968852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.968879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 
00:27:33.760 [2024-07-25 19:18:25.969046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.969073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.969280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.969307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.969457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.969483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.969706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.969739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.969927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.969957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 
00:27:33.760 [2024-07-25 19:18:25.970158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.970185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.970377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.970406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.970564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.970595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.970768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.970795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.970971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.970998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 
00:27:33.760 [2024-07-25 19:18:25.971187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.971218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.971442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.971469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.760 qpair failed and we were unable to recover it. 00:27:33.760 [2024-07-25 19:18:25.971660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.760 [2024-07-25 19:18:25.971689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.971900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.971929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.972128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.972155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 
00:27:33.761 [2024-07-25 19:18:25.972309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.972335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.972513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.972539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.972716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.972743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.972959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.972988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.973177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.973207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 
00:27:33.761 [2024-07-25 19:18:25.973397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.973424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.973598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.973624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.973812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.973842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.974037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.974064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.974264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.974293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 
00:27:33.761 [2024-07-25 19:18:25.974510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.974539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.974704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.974731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.974925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.974954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.975143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.975173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.975362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.975389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 
00:27:33.761 [2024-07-25 19:18:25.975608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.975642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.975840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.975870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.976084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.976121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.976339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.976365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.976589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.976619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 
00:27:33.761 [2024-07-25 19:18:25.976815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.976841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.977062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.977091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.977278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.977305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.977502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.977528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 00:27:33.761 [2024-07-25 19:18:25.977698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.761 [2024-07-25 19:18:25.977727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.761 qpair failed and we were unable to recover it. 
00:27:33.764 [2024-07-25 19:18:26.002493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.764 [2024-07-25 19:18:26.002519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.764 qpair failed and we were unable to recover it. 00:27:33.764 [2024-07-25 19:18:26.002693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.764 [2024-07-25 19:18:26.002719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.764 qpair failed and we were unable to recover it. 00:27:33.764 [2024-07-25 19:18:26.002894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.764 [2024-07-25 19:18:26.002920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.764 qpair failed and we were unable to recover it. 00:27:33.764 [2024-07-25 19:18:26.003145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.764 [2024-07-25 19:18:26.003175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.764 qpair failed and we were unable to recover it. 00:27:33.764 [2024-07-25 19:18:26.003370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.764 [2024-07-25 19:18:26.003399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.764 qpair failed and we were unable to recover it. 
00:27:33.764 [2024-07-25 19:18:26.003625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.764 [2024-07-25 19:18:26.003652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.764 qpair failed and we were unable to recover it. 00:27:33.764 [2024-07-25 19:18:26.003851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.764 [2024-07-25 19:18:26.003880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.764 qpair failed and we were unable to recover it. 00:27:33.764 [2024-07-25 19:18:26.004072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.004109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.004304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.004331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.004504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.004535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 
00:27:33.765 [2024-07-25 19:18:26.004705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.004735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.004951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.004978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.005197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.005227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.005409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.005436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.005612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.005639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 
00:27:33.765 [2024-07-25 19:18:26.005831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.005860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.006057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.006087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.006298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.006325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.006503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.006530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.006705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.006731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 
00:27:33.765 [2024-07-25 19:18:26.006898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.006925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.007116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.007160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.007333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.007360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.007562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.007589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.007763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.007789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 
00:27:33.765 [2024-07-25 19:18:26.008015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.008044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.008268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.008295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.008448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.008475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.008648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.008674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.008851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.008878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 
00:27:33.765 [2024-07-25 19:18:26.009081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.009118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.009345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.009372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.009567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.009594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.009784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.009810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.009954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.009995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 
00:27:33.765 [2024-07-25 19:18:26.010194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.010222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.010375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.010403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.010557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.010584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.010756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.010782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.010961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.010992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 
00:27:33.765 [2024-07-25 19:18:26.011210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.011238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.011418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.011445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.011642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.011670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.011852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.011882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 00:27:33.765 [2024-07-25 19:18:26.012059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.765 [2024-07-25 19:18:26.012088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.765 qpair failed and we were unable to recover it. 
00:27:33.766 [2024-07-25 19:18:26.012290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.012318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.012497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.012527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.012697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.012724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.012914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.012943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.013119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.013161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 
00:27:33.766 [2024-07-25 19:18:26.013349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.013386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.013585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.013615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.013808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.013838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.014030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.014057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.014217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.014244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 
00:27:33.766 [2024-07-25 19:18:26.014438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.014468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.014669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.014696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.014853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.014879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.015071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.015100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.015308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.015335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 
00:27:33.766 [2024-07-25 19:18:26.015519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.015549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.015719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.015748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.015949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.015975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.016118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.016173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.016395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.016422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 
00:27:33.766 [2024-07-25 19:18:26.016565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.016592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.016811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.016841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.017010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.017047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.017244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.017271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.017436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.017466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 
00:27:33.766 [2024-07-25 19:18:26.017636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.017674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.017890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.017917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.018087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.018126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.018324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.018353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.018542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.018569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 
00:27:33.766 [2024-07-25 19:18:26.018786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.018815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.019008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.019037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.019237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.019266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.019422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.019472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.019674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.019701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 
00:27:33.766 [2024-07-25 19:18:26.019876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.019902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.020099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.020165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.766 [2024-07-25 19:18:26.020315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.766 [2024-07-25 19:18:26.020342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.766 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.020511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.020538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.020704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.020731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 
00:27:33.767 [2024-07-25 19:18:26.020920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.020949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.021158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.021185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.021327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.021353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.021550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.021577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.021752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.021778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 
00:27:33.767 [2024-07-25 19:18:26.021957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.021983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.022124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.022151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.022327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.022353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.022551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.022580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.022755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.022785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 
00:27:33.767 [2024-07-25 19:18:26.023009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.023036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.023220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.023246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.023422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.023453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.023597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.023623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.023796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.023829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 
00:27:33.767 [2024-07-25 19:18:26.024010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.024042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.024245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.024272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.024463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.024493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.024657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.024687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.024885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.024911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 
00:27:33.767 [2024-07-25 19:18:26.025082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.025120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.025289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.025318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.025517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.025544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.025757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.025786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.025975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.026005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 
00:27:33.767 [2024-07-25 19:18:26.026174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.026201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.026381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.026426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.026619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.026646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.026816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.026843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.027005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.027034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 
00:27:33.767 [2024-07-25 19:18:26.027201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.027230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.027399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.027429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.027619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.027648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.027873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.027900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.767 [2024-07-25 19:18:26.028066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.028093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 
00:27:33.767 [2024-07-25 19:18:26.028261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.767 [2024-07-25 19:18:26.028287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.767 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.028464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.028494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.028664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.028692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.028866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.028898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.029092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.029130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 
00:27:33.768 [2024-07-25 19:18:26.029340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.029367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.029537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.029564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.029708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.029747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.029930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.029961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.030135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.030172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 
00:27:33.768 [2024-07-25 19:18:26.030330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.030356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.030535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.030562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.030756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.030786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.030957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.030987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.031158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.031186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 
00:27:33.768 [2024-07-25 19:18:26.031356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.031386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.031538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.031592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.031811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.031838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.032044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.032074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.032283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.032309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 
00:27:33.768 [2024-07-25 19:18:26.032474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.032501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.032671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.032701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.032886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.032916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.033125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.033155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.033305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.033331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 
00:27:33.768 [2024-07-25 19:18:26.033539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.033569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.033766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.033792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.033970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.033996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.034152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.034183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.034377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.034404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 
00:27:33.768 [2024-07-25 19:18:26.034603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.034632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.034796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.034826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.035024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.768 [2024-07-25 19:18:26.035051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.768 qpair failed and we were unable to recover it. 00:27:33.768 [2024-07-25 19:18:26.035209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.035239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.035406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.035436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 
00:27:33.769 [2024-07-25 19:18:26.035605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.035632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.035776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.035803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.035948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.035991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.036166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.036193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.036339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.036386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 
00:27:33.769 [2024-07-25 19:18:26.036563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.036590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.036792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.036819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.037019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.037045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.037198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.037225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.037394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.037420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 
00:27:33.769 [2024-07-25 19:18:26.037583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.037614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.037778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.037808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.038001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.038027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.038225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.038253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.038465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.038495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 
00:27:33.769 [2024-07-25 19:18:26.038693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.038720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.038891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.038921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.039139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.039168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.039342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.039371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.039564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.039593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 
00:27:33.769 [2024-07-25 19:18:26.039764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.039795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.039994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.040027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.040206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.040233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.040438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.040467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 00:27:33.769 [2024-07-25 19:18:26.040672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.769 [2024-07-25 19:18:26.040698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.769 qpair failed and we were unable to recover it. 
00:27:33.769 [2024-07-25 19:18:26.040888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.769 [2024-07-25 19:18:26.040918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.769 qpair failed and we were unable to recover it.
00:27:33.769 [2024-07-25 19:18:26.041114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.769 [2024-07-25 19:18:26.041154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.769 qpair failed and we were unable to recover it.
00:27:33.769 [2024-07-25 19:18:26.041347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.769 [2024-07-25 19:18:26.041378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.769 qpair failed and we were unable to recover it.
00:27:33.769 [2024-07-25 19:18:26.041573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.769 [2024-07-25 19:18:26.041603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.769 qpair failed and we were unable to recover it.
00:27:33.769 [2024-07-25 19:18:26.041774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.769 [2024-07-25 19:18:26.041803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.769 qpair failed and we were unable to recover it.
00:27:33.769 [2024-07-25 19:18:26.042002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.769 [2024-07-25 19:18:26.042029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.769 qpair failed and we were unable to recover it.
00:27:33.769 [2024-07-25 19:18:26.042186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.769 [2024-07-25 19:18:26.042213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.769 qpair failed and we were unable to recover it.
00:27:33.769 [2024-07-25 19:18:26.042387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.769 [2024-07-25 19:18:26.042414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.769 qpair failed and we were unable to recover it.
00:27:33.769 [2024-07-25 19:18:26.042584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.769 [2024-07-25 19:18:26.042611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.769 qpair failed and we were unable to recover it.
00:27:33.769 [2024-07-25 19:18:26.042749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.769 [2024-07-25 19:18:26.042776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.769 qpair failed and we were unable to recover it.
00:27:33.769 [2024-07-25 19:18:26.042927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.042971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.043167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.043194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.043391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.043426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.043600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.043630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.043822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.043849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.044010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.044040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.044221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.044249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.044405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.044433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.044579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.044607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.044778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.044805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.044950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.044977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.045141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.045168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.045362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.045393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.045591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.045618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.045762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.045789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.045964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.045993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.046184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.046212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.046380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.046423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.046612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.046642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.046834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.046865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.047053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.047082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.047294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.047321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.047492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.047519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.047659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.047685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.047840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.047884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.048111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.048153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.048324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.048366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.048540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.048570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.048766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.048793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.048946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.048976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.049143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.049171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.049345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.049378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.049520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.049551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.049695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.049738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.049980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.050009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.050190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.050217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.050367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.050409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.050615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.770 [2024-07-25 19:18:26.050641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.770 qpair failed and we were unable to recover it.
00:27:33.770 [2024-07-25 19:18:26.050782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.050809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.050947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.050973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.051159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.051186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.051359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.051385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.051580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.051609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.051780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.051807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.052003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.052032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.052218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.052245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.052401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.052427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.052579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.052606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.052755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.052781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.052947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.052974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.053147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.053174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.053322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.053349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.053496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.053524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.053689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.053716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.053904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.053933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.054166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.054194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.054371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.054414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.054611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.054641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.054833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.054860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.055063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.055092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.055261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.055290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.055457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.055483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.055679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.055708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.055922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.055951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.056173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.056200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.056367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.056394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.056559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.056590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.056784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.056811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.057001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.057030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.057213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.057240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.057392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.057422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.057611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.057640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.057833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.057863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.058055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.058081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.058252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.058278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.058447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.771 [2024-07-25 19:18:26.058474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.771 qpair failed and we were unable to recover it.
00:27:33.771 [2024-07-25 19:18:26.058651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.772 [2024-07-25 19:18:26.058677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.772 qpair failed and we were unable to recover it.
00:27:33.772 [2024-07-25 19:18:26.058825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.772 [2024-07-25 19:18:26.058851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.772 qpair failed and we were unable to recover it.
00:27:33.772 [2024-07-25 19:18:26.059023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.772 [2024-07-25 19:18:26.059049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.772 qpair failed and we were unable to recover it.
00:27:33.772 [2024-07-25 19:18:26.059224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.772 [2024-07-25 19:18:26.059251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.772 qpair failed and we were unable to recover it.
00:27:33.772 [2024-07-25 19:18:26.059430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.772 [2024-07-25 19:18:26.059457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.772 qpair failed and we were unable to recover it.
00:27:33.772 [2024-07-25 19:18:26.059629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.772 [2024-07-25 19:18:26.059655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.772 qpair failed and we were unable to recover it.
00:27:33.772 [2024-07-25 19:18:26.059806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.772 [2024-07-25 19:18:26.059832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.772 qpair failed and we were unable to recover it.
00:27:33.772 [2024-07-25 19:18:26.060000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.772 [2024-07-25 19:18:26.060030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.772 qpair failed and we were unable to recover it.
00:27:33.772 [2024-07-25 19:18:26.060251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.772 [2024-07-25 19:18:26.060281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.772 qpair failed and we were unable to recover it.
00:27:33.772 [2024-07-25 19:18:26.060475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.060502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.060691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.060720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.060880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.060910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.061124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.061172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.061326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.061352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 
00:27:33.772 [2024-07-25 19:18:26.061560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.061589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.061760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.061786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.061981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.062010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.062192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.062222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.062408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.062434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 
00:27:33.772 [2024-07-25 19:18:26.062639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.062666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.062814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.062856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.063045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.063075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.063266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.063294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.063491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.063521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 
00:27:33.772 [2024-07-25 19:18:26.063685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.063711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.063907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.063936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.064126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.064156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.064337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.064363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.064531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.064560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 
00:27:33.772 [2024-07-25 19:18:26.064730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.064760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.064929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.064955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.065148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.065175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.065320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.065347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.065530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.065557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 
00:27:33.772 [2024-07-25 19:18:26.065726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.065755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.065947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.065976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.066166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.066192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.066344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.772 [2024-07-25 19:18:26.066371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.772 qpair failed and we were unable to recover it. 00:27:33.772 [2024-07-25 19:18:26.066584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.066613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 
00:27:33.773 [2024-07-25 19:18:26.066784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.066811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.067008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.067037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.067241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.067268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.067415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.067441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.067616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.067642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 
00:27:33.773 [2024-07-25 19:18:26.067856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.067882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.068075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.068112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.068302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.068329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.068511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.068540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.068714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.068744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 
00:27:33.773 [2024-07-25 19:18:26.068883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.068910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.069115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.069145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.069326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.069352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.069533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.069563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.069731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.069760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 
00:27:33.773 [2024-07-25 19:18:26.069980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.070007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.070179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.070209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.070396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.070426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.070591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.070617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.070835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.070864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 
00:27:33.773 [2024-07-25 19:18:26.071021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.071050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.071243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.071270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.071461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.071489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.071684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.071714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.071884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.071910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 
00:27:33.773 [2024-07-25 19:18:26.072098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.072136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.072330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.072360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.072557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.072583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.072778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.072808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.773 [2024-07-25 19:18:26.073001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.073030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 
00:27:33.773 [2024-07-25 19:18:26.073225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.773 [2024-07-25 19:18:26.073252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.773 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.073403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.073429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.073608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.073635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.073809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.073835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.074026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.074055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 
00:27:33.774 [2024-07-25 19:18:26.074221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.074251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.074442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.074468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.074692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.074722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.074906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.074935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.075129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.075156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 
00:27:33.774 [2024-07-25 19:18:26.075369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.075398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.075615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.075644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.075812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.075839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.075977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.076005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.076179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.076208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 
00:27:33.774 [2024-07-25 19:18:26.076426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.076452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.076632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.076662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.076830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.076859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.077114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.077158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.077333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.077360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 
00:27:33.774 [2024-07-25 19:18:26.077583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.077610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.077751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.077777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.077991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.078020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.078194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.078220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 00:27:33.774 [2024-07-25 19:18:26.078399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.774 [2024-07-25 19:18:26.078425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.774 qpair failed and we were unable to recover it. 
00:27:33.774 [2024-07-25 19:18:26.078594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.774 [2024-07-25 19:18:26.078623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.774 qpair failed and we were unable to recover it.
00:27:33.774 [2024-07-25 19:18:26.078824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.774 [2024-07-25 19:18:26.078851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.774 qpair failed and we were unable to recover it.
00:27:33.774 [2024-07-25 19:18:26.079024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.774 [2024-07-25 19:18:26.079051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.774 qpair failed and we were unable to recover it.
00:27:33.774 [2024-07-25 19:18:26.079247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.774 [2024-07-25 19:18:26.079278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.774 qpair failed and we were unable to recover it.
00:27:33.774 [2024-07-25 19:18:26.079466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.774 [2024-07-25 19:18:26.079495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.774 qpair failed and we were unable to recover it.
00:27:33.774 [2024-07-25 19:18:26.079694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.774 [2024-07-25 19:18:26.079721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.774 qpair failed and we were unable to recover it.
00:27:33.774 [2024-07-25 19:18:26.079923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.774 [2024-07-25 19:18:26.079952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.774 qpair failed and we were unable to recover it.
00:27:33.774 [2024-07-25 19:18:26.080170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.774 [2024-07-25 19:18:26.080201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.774 qpair failed and we were unable to recover it.
00:27:33.774 [2024-07-25 19:18:26.080376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.774 [2024-07-25 19:18:26.080403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.774 qpair failed and we were unable to recover it.
00:27:33.774 [2024-07-25 19:18:26.080624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.774 [2024-07-25 19:18:26.080653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.774 qpair failed and we were unable to recover it.
00:27:33.774 [2024-07-25 19:18:26.080875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.774 [2024-07-25 19:18:26.080904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.774 qpair failed and we were unable to recover it.
00:27:33.774 [2024-07-25 19:18:26.081100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.774 [2024-07-25 19:18:26.081135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.774 qpair failed and we were unable to recover it.
00:27:33.774 [2024-07-25 19:18:26.081315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.774 [2024-07-25 19:18:26.081342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.774 qpair failed and we were unable to recover it.
00:27:33.774 [2024-07-25 19:18:26.081540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.774 [2024-07-25 19:18:26.081569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.774 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.081785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.081812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.082007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.082036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.082218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.082248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.082461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.082488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.082703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.082732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.082951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.082980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.083169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.083196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.083398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.083427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.083651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.083682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.083878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.083905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.084122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.084151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.084359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.084386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.084588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.084614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.084806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.084835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.084995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.085024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.085242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.085269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.085442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.085472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.085680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.085709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.085927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.085953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.086131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.086158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.086302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.086328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.086492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.086519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.086695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.086722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.086893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.086919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.087070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.087096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.087314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.087343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.087529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.087558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.087753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.087780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.087947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.087974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.088142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.088169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.088364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.088390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.088588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.088618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.088835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.088864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.089073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.089108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.089304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.089330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.089552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.089586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.089781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.089808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.775 qpair failed and we were unable to recover it.
00:27:33.775 [2024-07-25 19:18:26.089955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.775 [2024-07-25 19:18:26.089981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.090145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.090172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.090350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.090377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.090573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.090602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.090770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.090799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.090988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.091014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.091200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.091231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.091424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.091453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.091643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.091669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.091858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.091887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.092085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.092119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.092273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.092300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.092467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.092497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.092687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.092717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.092909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.092936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.093183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.093213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.093406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.093434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.093652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.093678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.093852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.093879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.094065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.094094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.094297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.094324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.094490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.094522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.094716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.094745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.094905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.094931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.095127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.095157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.095352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.095382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.095579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.095605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.095750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.095776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.095967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.095993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.096163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.096190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.096360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.096407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.096612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.096638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.096824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.096850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.097073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.097099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.097307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.097348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.097567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.097593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.097793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.097822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.097978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.098007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.776 qpair failed and we were unable to recover it.
00:27:33.776 [2024-07-25 19:18:26.098210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.776 [2024-07-25 19:18:26.098238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.098384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.098429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.098580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.098609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.098804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.098831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.099011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.099037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.099212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.099239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.099445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.099471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.099656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.099685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.099866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.099893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.100088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.100122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.100305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.100334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.100513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.100542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.100732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.100758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.100969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.100999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.101205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.101232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.101410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.101437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.101607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.101636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.101837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.101866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.102127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.102156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.102355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.102382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.102577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.102606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.102805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.102832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.103022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.103051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.103283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.103310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.103562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.103589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.103834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.777 [2024-07-25 19:18:26.103860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.777 qpair failed and we were unable to recover it.
00:27:33.777 [2024-07-25 19:18:26.104072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.777 [2024-07-25 19:18:26.104109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.777 qpair failed and we were unable to recover it. 00:27:33.777 [2024-07-25 19:18:26.104302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.777 [2024-07-25 19:18:26.104329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.777 qpair failed and we were unable to recover it. 00:27:33.777 [2024-07-25 19:18:26.104485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.777 [2024-07-25 19:18:26.104518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.777 qpair failed and we were unable to recover it. 00:27:33.777 [2024-07-25 19:18:26.104672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.777 [2024-07-25 19:18:26.104699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.777 qpair failed and we were unable to recover it. 00:27:33.777 [2024-07-25 19:18:26.104874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.777 [2024-07-25 19:18:26.104903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.777 qpair failed and we were unable to recover it. 
00:27:33.777 [2024-07-25 19:18:26.105056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.777 [2024-07-25 19:18:26.105083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.777 qpair failed and we were unable to recover it. 00:27:33.777 [2024-07-25 19:18:26.105296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.777 [2024-07-25 19:18:26.105325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.777 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.105547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.105574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.105740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.105777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.105961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.105991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 
00:27:33.778 [2024-07-25 19:18:26.106182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.106209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.106351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.106387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.106591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.106621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.106797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.106827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.107003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.107040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 
00:27:33.778 [2024-07-25 19:18:26.107275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.107304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.107465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.107492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.107638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.107667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.107847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.107877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.108040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.108067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 
00:27:33.778 [2024-07-25 19:18:26.108218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.108245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.108405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.108434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.108631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.108659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.108854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.108884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.109092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.109127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 
00:27:33.778 [2024-07-25 19:18:26.109321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.109349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.109572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.109602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.109792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.109822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.110004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.110030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.110185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.110217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 
00:27:33.778 [2024-07-25 19:18:26.110410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.110440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.110636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.110665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.110925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.110955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.111152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.111183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.111381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.111408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 
00:27:33.778 [2024-07-25 19:18:26.111628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.111672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.111834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.111863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.112048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.112075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.112309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.112336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.112496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.112523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 
00:27:33.778 [2024-07-25 19:18:26.112697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.112737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.112952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.112982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.113171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.113202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.113409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.113435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.778 qpair failed and we were unable to recover it. 00:27:33.778 [2024-07-25 19:18:26.113626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.778 [2024-07-25 19:18:26.113655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 
00:27:33.779 [2024-07-25 19:18:26.113816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.113845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.114016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.114042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.114217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.114244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.114389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.114431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.114616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.114642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 
00:27:33.779 [2024-07-25 19:18:26.114813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.114840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.115044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.115074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.115306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.115334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.115562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.115592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.115816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.115845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 
00:27:33.779 [2024-07-25 19:18:26.116061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.116110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.116339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.116373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.116574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.116602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.116772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.116799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.117022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.117051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 
00:27:33.779 [2024-07-25 19:18:26.117271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.117301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.117500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.117528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.117678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.117705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.117896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.117926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.118196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.118223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 
00:27:33.779 [2024-07-25 19:18:26.118443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.118473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.118696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.118723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.118869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.118896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.119045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.119073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.119253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.119280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 
00:27:33.779 [2024-07-25 19:18:26.119487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.119514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.119733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.119763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.119954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.119984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.120180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.120207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 00:27:33.779 [2024-07-25 19:18:26.120383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.779 [2024-07-25 19:18:26.120410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.779 qpair failed and we were unable to recover it. 
00:27:33.779 [2024-07-25 19:18:26.120581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.779 [2024-07-25 19:18:26.120612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.779 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1023 connect() errno = 111, nvme_tcp.c:2383 sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously from 19:18:26.120785 through 19:18:26.145407 ...]
00:27:33.782 [2024-07-25 19:18:26.145624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.782 [2024-07-25 19:18:26.145651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:33.782 qpair failed and we were unable to recover it.
00:27:33.782 [2024-07-25 19:18:26.145833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.782 [2024-07-25 19:18:26.145860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.782 qpair failed and we were unable to recover it. 00:27:33.783 [2024-07-25 19:18:26.146074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.783 [2024-07-25 19:18:26.146110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:33.783 qpair failed and we were unable to recover it. 00:27:33.783 [2024-07-25 19:18:26.146332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.783 [2024-07-25 19:18:26.146381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.783 qpair failed and we were unable to recover it. 00:27:33.783 [2024-07-25 19:18:26.146562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.783 [2024-07-25 19:18:26.146592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.783 qpair failed and we were unable to recover it. 00:27:33.783 [2024-07-25 19:18:26.146816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.783 [2024-07-25 19:18:26.146860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.783 qpair failed and we were unable to recover it. 
00:27:33.783 [... connect() failed, errno = 111; sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it; repeated from 19:18:26.147081 through 19:18:26.168587 ...]
00:27:33.785 [2024-07-25 19:18:26.168792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.168840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 00:27:33.785 [2024-07-25 19:18:26.169038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.169065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 00:27:33.785 [2024-07-25 19:18:26.169272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.169300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 00:27:33.785 [2024-07-25 19:18:26.169498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.169544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 00:27:33.785 [2024-07-25 19:18:26.169749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.169792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 
00:27:33.785 [2024-07-25 19:18:26.169963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.169990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 00:27:33.785 [2024-07-25 19:18:26.170182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.170228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 00:27:33.785 [2024-07-25 19:18:26.170398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.170444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 00:27:33.785 [2024-07-25 19:18:26.170615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.170660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 00:27:33.785 [2024-07-25 19:18:26.170893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.170920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 
00:27:33.785 [2024-07-25 19:18:26.171121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.171149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 00:27:33.785 [2024-07-25 19:18:26.171341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.171385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 00:27:33.785 [2024-07-25 19:18:26.171584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.171630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 00:27:33.785 [2024-07-25 19:18:26.171857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.171901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 00:27:33.785 [2024-07-25 19:18:26.172092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.172125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 
00:27:33.785 [2024-07-25 19:18:26.172278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.172304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 00:27:33.785 [2024-07-25 19:18:26.172503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.172547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 00:27:33.785 [2024-07-25 19:18:26.172773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.785 [2024-07-25 19:18:26.172816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.785 qpair failed and we were unable to recover it. 00:27:33.785 [2024-07-25 19:18:26.172967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.172994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.173190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.173239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 
00:27:33.786 [2024-07-25 19:18:26.173467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.173512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.173724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.173768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.173965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.173992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.174173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.174220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.174395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.174441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 
00:27:33.786 [2024-07-25 19:18:26.174669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.174713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.174868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.174895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.175100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.175132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.175303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.175348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.175577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.175621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 
00:27:33.786 [2024-07-25 19:18:26.175829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.175857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.176037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.176064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.176240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.176286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.176482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.176526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.176732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.176777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 
00:27:33.786 [2024-07-25 19:18:26.176956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.176983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.177165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.177193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.177409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.177453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.177691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.177735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.177937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.177964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 
00:27:33.786 [2024-07-25 19:18:26.178174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.178221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.178427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.178471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.178644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.178687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.178912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.178957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.179169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.179197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 
00:27:33.786 [2024-07-25 19:18:26.179394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.179438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.179611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.179656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.179837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.179880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.180053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.180080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.180319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.180363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 
00:27:33.786 [2024-07-25 19:18:26.180537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.180581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.180783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.180828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.180990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.181016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.181208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.181253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 00:27:33.786 [2024-07-25 19:18:26.181433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.786 [2024-07-25 19:18:26.181479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.786 qpair failed and we were unable to recover it. 
00:27:33.786 [2024-07-25 19:18:26.181674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.181719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.181920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.181947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.182110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.182143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.182344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.182389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.182559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.182604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 
00:27:33.787 [2024-07-25 19:18:26.182831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.182875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.183044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.183070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.183236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.183264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.183490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.183535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.183711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.183755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 
00:27:33.787 [2024-07-25 19:18:26.183953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.183980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.184196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.184241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.184445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.184493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.184669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.184713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.184901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.184928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 
00:27:33.787 [2024-07-25 19:18:26.185125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.185182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.185395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.185438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.185639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.185685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.185860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.185887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.186039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.186067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 
00:27:33.787 [2024-07-25 19:18:26.186289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.186317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.186492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.186537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.186762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.186806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.187008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.187035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 00:27:33.787 [2024-07-25 19:18:26.187243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.787 [2024-07-25 19:18:26.187288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:33.787 qpair failed and we were unable to recover it. 
00:27:34.059 [2024-07-25 19:18:26.214044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.059 [2024-07-25 19:18:26.214072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.059 qpair failed and we were unable to recover it. 00:27:34.059 [2024-07-25 19:18:26.214317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.059 [2024-07-25 19:18:26.214362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.059 qpair failed and we were unable to recover it. 00:27:34.059 [2024-07-25 19:18:26.214562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.059 [2024-07-25 19:18:26.214592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.059 qpair failed and we were unable to recover it. 00:27:34.059 [2024-07-25 19:18:26.214784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.059 [2024-07-25 19:18:26.214828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.215006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.215033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 
00:27:34.060 [2024-07-25 19:18:26.215195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.215239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.215473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.215517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.215715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.215760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.215937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.215967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.216169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.216214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 
00:27:34.060 [2024-07-25 19:18:26.216410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.216456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.216671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.216699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.216876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.216904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.217042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.217070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.217266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.217311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 
00:27:34.060 [2024-07-25 19:18:26.217515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.217559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.217764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.217807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.217984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.218011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.218179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.218224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.218425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.218469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 
00:27:34.060 [2024-07-25 19:18:26.218657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.218702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.218852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.218880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.219059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.219086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.219290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.219334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.219506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.219549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 
00:27:34.060 [2024-07-25 19:18:26.219754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.219807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.220081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.220115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.220382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.220426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.220631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.220676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.220915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.220959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 
00:27:34.060 [2024-07-25 19:18:26.221268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.221313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.221554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.221598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.221860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.221905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.222139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.222173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.222448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.222476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 
00:27:34.060 [2024-07-25 19:18:26.222695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.222739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.222940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.222983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.223185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.223216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.223422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.060 [2024-07-25 19:18:26.223452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.060 qpair failed and we were unable to recover it. 00:27:34.060 [2024-07-25 19:18:26.223670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.223715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 
00:27:34.061 [2024-07-25 19:18:26.223950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.223995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.224223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.224268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.224476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.224519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.224742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.224788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.224988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.225031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 
00:27:34.061 [2024-07-25 19:18:26.225245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.225291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.225504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.225547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.225761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.225806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.226002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.226030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.226200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.226244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 
00:27:34.061 [2024-07-25 19:18:26.226460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.226488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.226705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.226733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.226903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.226929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.227111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.227139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.227315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.227358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 
00:27:34.061 [2024-07-25 19:18:26.227546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.227591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.227756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.227799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.227981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.228008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.228206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.228252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.228459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.228504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 
00:27:34.061 [2024-07-25 19:18:26.228668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.228713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.228911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.228939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.229116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.229157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.229364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.229395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.229623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.229667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 
00:27:34.061 [2024-07-25 19:18:26.229899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.229944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.230149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.230177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.230405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.230456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.230630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.230674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.230852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.230896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 
00:27:34.061 [2024-07-25 19:18:26.231088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.231122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.231330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.231369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.231542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.061 [2024-07-25 19:18:26.231590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.061 qpair failed and we were unable to recover it. 00:27:34.061 [2024-07-25 19:18:26.231796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.062 [2024-07-25 19:18:26.231840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.062 qpair failed and we were unable to recover it. 00:27:34.062 [2024-07-25 19:18:26.232015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.062 [2024-07-25 19:18:26.232043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.062 qpair failed and we were unable to recover it. 
00:27:34.062 [2024-07-25 19:18:26.232242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.062 [2024-07-25 19:18:26.232286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:34.062 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7f63d4000b90 repeated through 19:18:26.246 ...]
00:27:34.063 [2024-07-25 19:18:26.246593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.063 [2024-07-25 19:18:26.246638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.063 qpair failed and we were unable to recover it.
[... the same failure sequence for tqpair=0x7f63dc000b90 repeated through 19:18:26.248 ...]
00:27:34.064 [2024-07-25 19:18:26.248598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.064 [2024-07-25 19:18:26.248644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:34.064 qpair failed and we were unable to recover it.
[... the same failure sequence for tqpair=0x7f63cc000b90 repeated through 19:18:26.257 ...]
00:27:34.065 [2024-07-25 19:18:26.257282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.065 [2024-07-25 19:18:26.257322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.065 qpair failed and we were unable to recover it.
00:27:34.065 [2024-07-25 19:18:26.257540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.065 [2024-07-25 19:18:26.257584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.065 qpair failed and we were unable to recover it.
[... the same failure sequence for tqpair=0x7f63dc000b90 repeated through 19:18:26.259 ...]
00:27:34.065 [2024-07-25 19:18:26.259276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.065 [2024-07-25 19:18:26.259313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.065 qpair failed and we were unable to recover it. 00:27:34.065 [2024-07-25 19:18:26.259492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.065 [2024-07-25 19:18:26.259523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.065 qpair failed and we were unable to recover it. 00:27:34.065 [2024-07-25 19:18:26.259841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.065 [2024-07-25 19:18:26.259893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.065 qpair failed and we were unable to recover it. 00:27:34.065 [2024-07-25 19:18:26.260219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.065 [2024-07-25 19:18:26.260248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.065 qpair failed and we were unable to recover it. 00:27:34.065 [2024-07-25 19:18:26.260433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.065 [2024-07-25 19:18:26.260461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.065 qpair failed and we were unable to recover it. 
00:27:34.065 [2024-07-25 19:18:26.260743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.065 [2024-07-25 19:18:26.260797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.065 qpair failed and we were unable to recover it. 00:27:34.065 [2024-07-25 19:18:26.261111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.065 [2024-07-25 19:18:26.261167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.065 qpair failed and we were unable to recover it. 00:27:34.065 [2024-07-25 19:18:26.261339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.065 [2024-07-25 19:18:26.261373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.065 qpair failed and we were unable to recover it. 00:27:34.065 [2024-07-25 19:18:26.261597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.065 [2024-07-25 19:18:26.261625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.065 qpair failed and we were unable to recover it. 00:27:34.065 [2024-07-25 19:18:26.261936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.065 [2024-07-25 19:18:26.261990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.065 qpair failed and we were unable to recover it. 
00:27:34.065 [2024-07-25 19:18:26.262190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.065 [2024-07-25 19:18:26.262218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.065 qpair failed and we were unable to recover it. 00:27:34.065 [2024-07-25 19:18:26.262409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.065 [2024-07-25 19:18:26.262437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.065 qpair failed and we were unable to recover it. 00:27:34.065 [2024-07-25 19:18:26.262595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.065 [2024-07-25 19:18:26.262623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.065 qpair failed and we were unable to recover it. 00:27:34.065 [2024-07-25 19:18:26.262802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.065 [2024-07-25 19:18:26.262832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.065 qpair failed and we were unable to recover it. 00:27:34.065 [2024-07-25 19:18:26.263016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.065 [2024-07-25 19:18:26.263047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.065 qpair failed and we were unable to recover it. 
00:27:34.066 [2024-07-25 19:18:26.263226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.263254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.263401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.263445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.263667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.263698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.263898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.263926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.264175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.264220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 
00:27:34.066 [2024-07-25 19:18:26.264454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.264485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.264686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.264714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.264868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.264896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.265048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.265093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.265277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.265305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 
00:27:34.066 [2024-07-25 19:18:26.265499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.265529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.265852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.265903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.266109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.266136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.266309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.266337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.266547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.266574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 
00:27:34.066 [2024-07-25 19:18:26.266802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.266829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.267027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.267057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.267257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.267284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.267454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.267481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.267783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.267835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 
00:27:34.066 [2024-07-25 19:18:26.268054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.268084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.268311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.268339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.268583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.268614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.268967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.269032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.269222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.269248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 
00:27:34.066 [2024-07-25 19:18:26.269392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.269437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.269622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.269653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.269842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.269869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.270074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.270110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.270329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.270356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 
00:27:34.066 [2024-07-25 19:18:26.270501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.270530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.270824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.270879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.271067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.271110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.271306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.271333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.271505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.271537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 
00:27:34.066 [2024-07-25 19:18:26.271854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.271915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.066 [2024-07-25 19:18:26.272110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.066 [2024-07-25 19:18:26.272138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.066 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.272331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.272359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.272531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.272561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.272755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.272782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 
00:27:34.067 [2024-07-25 19:18:26.272938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.272983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.273203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.273234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.273406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.273432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.273623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.273653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.273839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.273869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 
00:27:34.067 [2024-07-25 19:18:26.274066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.274093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.274284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.274314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.274526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.274556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.274780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.274807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.274978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.275006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 
00:27:34.067 [2024-07-25 19:18:26.275154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.275181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.275353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.275382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.275593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.275623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.275806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.275835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.276060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.276087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 
00:27:34.067 [2024-07-25 19:18:26.276268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.276296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.276464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.276494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.276684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.276712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.276921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.276948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.277095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.277145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 
00:27:34.067 [2024-07-25 19:18:26.277319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.277347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.277519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.277546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.277746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.277773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.278005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.278033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 00:27:34.067 [2024-07-25 19:18:26.278227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.067 [2024-07-25 19:18:26.278257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.067 qpair failed and we were unable to recover it. 
00:27:34.071 [2024-07-25 19:18:26.302109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.302155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.302328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.302355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.302553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.302580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.302742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.302772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.302999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.303032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 
00:27:34.071 [2024-07-25 19:18:26.303210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.303237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.303414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.303441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.303603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.303630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.303831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.303858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.304068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.304097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 
00:27:34.071 [2024-07-25 19:18:26.304297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.304325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.304496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.304523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.304671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.304699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.304837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.304862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.305011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.305039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 
00:27:34.071 [2024-07-25 19:18:26.305212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.305238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.305387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.305414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.305587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.305614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.305774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.305801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.306008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.306038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 
00:27:34.071 [2024-07-25 19:18:26.306220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.306249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.306467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.306497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.306660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.306690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.306885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.306912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.307064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.307091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 
00:27:34.071 [2024-07-25 19:18:26.307258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.307285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.307484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.307510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.307674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.307703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.307905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.307932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.308147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.308174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 
00:27:34.071 [2024-07-25 19:18:26.308375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.071 [2024-07-25 19:18:26.308420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.071 qpair failed and we were unable to recover it. 00:27:34.071 [2024-07-25 19:18:26.308610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.308639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.308815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.308843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.309035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.309064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.309243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.309270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 
00:27:34.072 [2024-07-25 19:18:26.309447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.309474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.309638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.309665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.309841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.309867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.310072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.310099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.310280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.310310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 
00:27:34.072 [2024-07-25 19:18:26.310497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.310527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.310693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.310720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.310912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.310942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.311159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.311187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.311362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.311394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 
00:27:34.072 [2024-07-25 19:18:26.311560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.311589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.311775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.311806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.312007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.312034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.312228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.312258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.312452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.312481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 
00:27:34.072 [2024-07-25 19:18:26.312670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.312696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.312869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.312896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.313046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.313073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.313259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.313285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.313435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.313461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 
00:27:34.072 [2024-07-25 19:18:26.313631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.313659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.313803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.313830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.314026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.314055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.314251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.314278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.314423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.314450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 
00:27:34.072 [2024-07-25 19:18:26.314613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.314644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.314814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.314844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.315035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.315063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.315243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.315270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.315446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.315476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 
00:27:34.072 [2024-07-25 19:18:26.315646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.315673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.315839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.072 [2024-07-25 19:18:26.315866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.072 qpair failed and we were unable to recover it. 00:27:34.072 [2024-07-25 19:18:26.316055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.073 [2024-07-25 19:18:26.316085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.073 qpair failed and we were unable to recover it. 00:27:34.073 [2024-07-25 19:18:26.316288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.073 [2024-07-25 19:18:26.316314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.073 qpair failed and we were unable to recover it. 00:27:34.073 [2024-07-25 19:18:26.316484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.073 [2024-07-25 19:18:26.316514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.073 qpair failed and we were unable to recover it. 
00:27:34.073 [2024-07-25 19:18:26.316705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.073 [2024-07-25 19:18:26.316737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.073 qpair failed and we were unable to recover it. 00:27:34.073 [2024-07-25 19:18:26.316934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.073 [2024-07-25 19:18:26.316963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.073 qpair failed and we were unable to recover it. 00:27:34.073 [2024-07-25 19:18:26.317183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.073 [2024-07-25 19:18:26.317214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.073 qpair failed and we were unable to recover it. 00:27:34.073 [2024-07-25 19:18:26.317437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.073 [2024-07-25 19:18:26.317467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.073 qpair failed and we were unable to recover it. 00:27:34.073 [2024-07-25 19:18:26.317653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.073 [2024-07-25 19:18:26.317681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.073 qpair failed and we were unable to recover it. 
00:27:34.073 [2024-07-25 19:18:26.317903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.073 [2024-07-25 19:18:26.317933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.073 qpair failed and we were unable to recover it.
[The same three-line record — posix.c:1023:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats verbatim roughly every 0.2 ms from 19:18:26.318113 through 19:18:26.342400, with only the timestamps changing.]
00:27:34.076 [2024-07-25 19:18:26.342579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.076 [2024-07-25 19:18:26.342607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.076 qpair failed and we were unable to recover it. 00:27:34.076 [2024-07-25 19:18:26.342843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.076 [2024-07-25 19:18:26.342870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.076 qpair failed and we were unable to recover it. 00:27:34.076 [2024-07-25 19:18:26.343062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.076 [2024-07-25 19:18:26.343092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.076 qpair failed and we were unable to recover it. 00:27:34.076 [2024-07-25 19:18:26.343265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.076 [2024-07-25 19:18:26.343296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.076 qpair failed and we were unable to recover it. 00:27:34.076 [2024-07-25 19:18:26.343458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.076 [2024-07-25 19:18:26.343485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.076 qpair failed and we were unable to recover it. 
00:27:34.076 [2024-07-25 19:18:26.343663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.076 [2024-07-25 19:18:26.343690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.076 qpair failed and we were unable to recover it. 00:27:34.076 [2024-07-25 19:18:26.343859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.076 [2024-07-25 19:18:26.343888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.076 qpair failed and we were unable to recover it. 00:27:34.076 [2024-07-25 19:18:26.344083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.076 [2024-07-25 19:18:26.344116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.076 qpair failed and we were unable to recover it. 00:27:34.076 [2024-07-25 19:18:26.344259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.076 [2024-07-25 19:18:26.344285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.076 qpair failed and we were unable to recover it. 00:27:34.076 [2024-07-25 19:18:26.344458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.076 [2024-07-25 19:18:26.344504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.076 qpair failed and we were unable to recover it. 
00:27:34.076 [2024-07-25 19:18:26.344724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.076 [2024-07-25 19:18:26.344751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.076 qpair failed and we were unable to recover it. 00:27:34.076 [2024-07-25 19:18:26.344918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.076 [2024-07-25 19:18:26.344948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.076 qpair failed and we were unable to recover it. 00:27:34.076 [2024-07-25 19:18:26.345110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.076 [2024-07-25 19:18:26.345139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.076 qpair failed and we were unable to recover it. 00:27:34.076 [2024-07-25 19:18:26.345308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.345334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.345555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.345592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 
00:27:34.077 [2024-07-25 19:18:26.345788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.345818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.345970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.345997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.346170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.346197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.346388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.346417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.346588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.346615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 
00:27:34.077 [2024-07-25 19:18:26.346812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.346842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.347047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.347076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.347242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.347269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.347463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.347493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.347702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.347731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 
00:27:34.077 [2024-07-25 19:18:26.347923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.347950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.348133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.348160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.348354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.348381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.348563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.348590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.348796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.348825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 
00:27:34.077 [2024-07-25 19:18:26.349016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.349044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.349238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.349265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.349454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.349483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.349640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.349669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.349869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.349895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 
00:27:34.077 [2024-07-25 19:18:26.350092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.350139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.350299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.350329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.350533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.350559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.350749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.350778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.350946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.350976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 
00:27:34.077 [2024-07-25 19:18:26.351143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.351170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.351367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.351397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.351581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.351610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.351807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.351833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.351999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.352028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 
00:27:34.077 [2024-07-25 19:18:26.352187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.352218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.077 qpair failed and we were unable to recover it. 00:27:34.077 [2024-07-25 19:18:26.352390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.077 [2024-07-25 19:18:26.352417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.352639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.352669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.352860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.352889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.353082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.353115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 
00:27:34.078 [2024-07-25 19:18:26.353268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.353295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.353463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.353494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.353688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.353714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.353886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.353911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.354133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.354168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 
00:27:34.078 [2024-07-25 19:18:26.354362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.354388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.354584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.354613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.354826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.354856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.355021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.355048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.355227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.355255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 
00:27:34.078 [2024-07-25 19:18:26.355416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.355445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.355663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.355688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.355882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.355911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.356110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.356137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.356281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.356308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 
00:27:34.078 [2024-07-25 19:18:26.356503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.356529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.356705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.356736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.356934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.356960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.357200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.357227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.357402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.357430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 
00:27:34.078 [2024-07-25 19:18:26.357627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.357653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.357879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.357908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.078 [2024-07-25 19:18:26.358137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.078 [2024-07-25 19:18:26.358164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.078 qpair failed and we were unable to recover it. 00:27:34.079 [2024-07-25 19:18:26.358322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.079 [2024-07-25 19:18:26.358349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.079 qpair failed and we were unable to recover it. 00:27:34.079 [2024-07-25 19:18:26.358548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.079 [2024-07-25 19:18:26.358578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.079 qpair failed and we were unable to recover it. 
00:27:34.079 [2024-07-25 19:18:26.358774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.079 [2024-07-25 19:18:26.358804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.079 qpair failed and we were unable to recover it. 
[... the identical connect()/qpair-recovery error pair (posix.c:1023 posix_sock_create, errno = 111; nvme_tcp.c:2383 nvme_tcp_qpair_connect_sock, tqpair=0x7f63dc000b90, addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 19:18:26.358 through 19:18:26.383, differing only in timestamps ...]
00:27:34.082 [2024-07-25 19:18:26.384030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.384057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.384205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.384233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.384408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.384434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.384658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.384684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.384866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.384894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-07-25 19:18:26.385050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.385079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.385272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.385300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.385496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.385526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.385719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.385748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.385965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.385991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-07-25 19:18:26.386171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.386200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.386403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.386430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.386597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.386624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.386797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.386823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.387014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.387043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-07-25 19:18:26.387226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.387253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.387408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.387435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.387652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.387681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.387850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.387877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.388054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.388080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-07-25 19:18:26.388295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.388322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.388524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.388550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.082 [2024-07-25 19:18:26.388752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.082 [2024-07-25 19:18:26.388779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.082 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.388984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.389013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.389211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.389245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-07-25 19:18:26.389401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.389445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.389659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.389688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.389874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.389900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.390080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.390121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.390304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.390333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-07-25 19:18:26.390527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.390555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.390708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.390735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.390952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.390982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.391152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.391180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.391376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.391405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-07-25 19:18:26.391606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.391635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.391835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.391861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.392042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.392068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.392295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.392322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.392525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.392551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-07-25 19:18:26.392754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.392783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.392999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.393028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.393236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.393263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.393409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.393436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.393603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.393630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-07-25 19:18:26.393800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.393827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.393996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.394023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.394178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.394204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.394354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.394380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 00:27:34.083 [2024-07-25 19:18:26.394556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.083 [2024-07-25 19:18:26.394583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.084 [2024-07-25 19:18:26.394797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.394826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.394992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.395018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.395236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.395266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.395454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.395483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.395696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.395723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 
00:27:34.084 [2024-07-25 19:18:26.395916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.395946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.396112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.396143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.396354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.396390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.396594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.396624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.396785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.396815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 
00:27:34.084 [2024-07-25 19:18:26.397030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.397057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.397206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.397234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.397425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.397454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.397670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.397697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.397915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.397949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 
00:27:34.084 [2024-07-25 19:18:26.398171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.398201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.398401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.398428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.398624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.398654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.398831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.398858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.398999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.399025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 
00:27:34.084 [2024-07-25 19:18:26.399212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.399240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.399410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.399440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.399631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.399658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.399872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.399901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.400093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.400129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 
00:27:34.084 [2024-07-25 19:18:26.400338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.400365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.400533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.400563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.400768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.400795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.400974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.401002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.401173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.401203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 
00:27:34.084 [2024-07-25 19:18:26.401393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.084 [2024-07-25 19:18:26.401422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.084 qpair failed and we were unable to recover it. 00:27:34.084 [2024-07-25 19:18:26.401642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.401668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.401863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.401892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.402086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.402123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.402299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.402326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 
00:27:34.085 [2024-07-25 19:18:26.402483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.402509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.402657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.402702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.402887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.402914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.403114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.403144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.403313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.403343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 
00:27:34.085 [2024-07-25 19:18:26.403565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.403591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.403763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.403793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.403984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.404014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.404187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.404216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.404413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.404442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 
00:27:34.085 [2024-07-25 19:18:26.404621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.404650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.404826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.404853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.405072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.405109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.405305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.405333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.405551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.405576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 
00:27:34.085 [2024-07-25 19:18:26.405776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.405806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.406026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.406053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.406227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.406255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.406413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.406439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.406607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.406633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 
00:27:34.085 [2024-07-25 19:18:26.406789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.406815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.407036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.407066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.407272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.407300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.407471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.407498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.407700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.407729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 
00:27:34.085 [2024-07-25 19:18:26.407938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.407964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.408165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.408192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.408391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.408421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.408641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.408671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.408895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.408921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 
00:27:34.085 [2024-07-25 19:18:26.409074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.085 [2024-07-25 19:18:26.409100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.085 qpair failed and we were unable to recover it. 00:27:34.085 [2024-07-25 19:18:26.409250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.409279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.409453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.409481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.409683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.409724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.409883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.409912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 
00:27:34.086 [2024-07-25 19:18:26.410127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.410155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.410346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.410376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.410583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.410614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.410834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.410862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.411098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.411131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 
00:27:34.086 [2024-07-25 19:18:26.411332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.411359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.411567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.411594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.411750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.411776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.411927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.411954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.412127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.412155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 
00:27:34.086 [2024-07-25 19:18:26.412353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.412382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.412539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.412573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.412789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.412816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.412968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.412995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.413172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.413199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 
00:27:34.086 [2024-07-25 19:18:26.413378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.413404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.413573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.413599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.413777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.413804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.413967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.413997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.414195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.414233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 
00:27:34.086 [2024-07-25 19:18:26.414424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.414454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.414643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.414671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.414845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.414875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.415095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.415129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.415309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.415334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 
00:27:34.086 [2024-07-25 19:18:26.415512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.415538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.415757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.415787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.415987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.416014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.416217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.416247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.416465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.416491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 
00:27:34.086 [2024-07-25 19:18:26.416691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.416718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.086 [2024-07-25 19:18:26.416941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.086 [2024-07-25 19:18:26.416970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.086 qpair failed and we were unable to recover it. 00:27:34.087 [2024-07-25 19:18:26.417127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.417156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 00:27:34.087 [2024-07-25 19:18:26.417336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.417363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 00:27:34.087 [2024-07-25 19:18:26.417538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.417565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 
00:27:34.087 [2024-07-25 19:18:26.417760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.417787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 00:27:34.087 [2024-07-25 19:18:26.417987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.418012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 00:27:34.087 [2024-07-25 19:18:26.418238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.418267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 00:27:34.087 [2024-07-25 19:18:26.418441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.418470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 00:27:34.087 [2024-07-25 19:18:26.418656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.418682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 
00:27:34.087 [2024-07-25 19:18:26.418907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.418936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 00:27:34.087 [2024-07-25 19:18:26.419156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.419186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 00:27:34.087 [2024-07-25 19:18:26.419355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.419393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 00:27:34.087 [2024-07-25 19:18:26.419584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.419614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 00:27:34.087 [2024-07-25 19:18:26.419834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.419859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 
00:27:34.087 [2024-07-25 19:18:26.420068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.420094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 00:27:34.087 [2024-07-25 19:18:26.420310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.420338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 00:27:34.087 [2024-07-25 19:18:26.420542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.420569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 00:27:34.087 [2024-07-25 19:18:26.420769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.420795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 00:27:34.087 [2024-07-25 19:18:26.420986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.087 [2024-07-25 19:18:26.421016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.087 qpair failed and we were unable to recover it. 
00:27:34.087 [2024-07-25 19:18:26.421235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.087 [2024-07-25 19:18:26.421262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.087 qpair failed and we were unable to recover it.
00:27:34.087 [2024-07-25 19:18:26.421406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.087 [2024-07-25 19:18:26.421437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.087 qpair failed and we were unable to recover it.
00:27:34.087 [2024-07-25 19:18:26.421653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.087 [2024-07-25 19:18:26.421683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.087 qpair failed and we were unable to recover it.
00:27:34.087 [2024-07-25 19:18:26.421875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.087 [2024-07-25 19:18:26.421903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.087 qpair failed and we were unable to recover it.
00:27:34.087 [2024-07-25 19:18:26.422122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.087 [2024-07-25 19:18:26.422159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.087 qpair failed and we were unable to recover it.
00:27:34.087 [2024-07-25 19:18:26.422310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.087 [2024-07-25 19:18:26.422336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.087 qpair failed and we were unable to recover it.
00:27:34.087 [2024-07-25 19:18:26.422513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.087 [2024-07-25 19:18:26.422540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.087 qpair failed and we were unable to recover it.
00:27:34.087 [2024-07-25 19:18:26.422743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.087 [2024-07-25 19:18:26.422770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.087 qpair failed and we were unable to recover it.
00:27:34.087 [2024-07-25 19:18:26.422960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.087 [2024-07-25 19:18:26.422986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.087 qpair failed and we were unable to recover it.
00:27:34.087 [2024-07-25 19:18:26.423162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.087 [2024-07-25 19:18:26.423192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.087 qpair failed and we were unable to recover it.
00:27:34.087 [2024-07-25 19:18:26.423410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.087 [2024-07-25 19:18:26.423436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.087 qpair failed and we were unable to recover it.
00:27:34.087 [2024-07-25 19:18:26.423607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.087 [2024-07-25 19:18:26.423637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.087 qpair failed and we were unable to recover it.
00:27:34.087 [2024-07-25 19:18:26.423822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.087 [2024-07-25 19:18:26.423852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.087 qpair failed and we were unable to recover it.
00:27:34.087 [2024-07-25 19:18:26.424047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.087 [2024-07-25 19:18:26.424074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.087 qpair failed and we were unable to recover it.
00:27:34.087 [2024-07-25 19:18:26.424258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.424285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.424489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.424518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.424735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.424773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.424969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.424999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.425185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.425211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.425410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.425436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.425643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.425670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.425849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.425875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.426062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.426089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.426300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.426330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.426520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.426549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.426762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.426787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.426999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.427044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.427295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.427325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.427537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.427566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.427762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.427793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.427957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.427999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.428233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.428262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.428459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.428490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.428701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.428731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.428957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.428985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.429152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.429182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.429340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.429380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.429578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.429605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.429780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.429811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.430022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.430066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.430296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.430324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.430540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.430580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.430831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.088 [2024-07-25 19:18:26.430858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.088 qpair failed and we were unable to recover it.
00:27:34.088 [2024-07-25 19:18:26.431032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.431070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.431252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.431279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.431450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.431477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.431669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.431696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.431837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.431862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.432034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.432061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.432275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.432302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.432574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.432622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.432978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.433040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.433235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.433263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.433438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.433481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.433738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.433765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.433965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.434030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.434207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.434234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.434424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.434453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.434650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.434677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.434967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.435029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.435205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.435233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.435401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.435427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.435709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.435765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.435974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.436005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.436226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.436253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.436446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.436476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.436667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.436695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.436899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.436925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.437118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.437164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.437348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.437387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.437550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.437577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.437749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.437776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.437949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.437975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.438121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.438150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.438324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.089 [2024-07-25 19:18:26.438351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.089 qpair failed and we were unable to recover it.
00:27:34.089 [2024-07-25 19:18:26.438742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.438800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.439003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.439030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.439227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.439254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.439441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.439470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.439664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.439691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.439955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.440010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.440241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.440268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.440446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.440474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.440687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.440733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.440912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.440940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.441184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.441211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.441355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.441391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.441536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.441581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.441797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.441824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.442033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.442062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.442247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.442273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.442477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.442503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.442694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.442726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.442948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.442978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.443178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.443205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.443399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.443429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.443720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.443778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.443998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.444025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.444223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.444253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.444459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.444488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.444671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.444698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.444892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.444919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.445083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.445120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.445335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.445361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.445548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.445578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.445890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.445958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.446159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.446186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.446376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.446405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.446616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.446646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.446869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.446896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.447084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.447118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.090 qpair failed and we were unable to recover it.
00:27:34.090 [2024-07-25 19:18:26.447321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.090 [2024-07-25 19:18:26.447350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.091 qpair failed and we were unable to recover it.
00:27:34.091 [2024-07-25 19:18:26.447556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.091 [2024-07-25 19:18:26.447583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.091 qpair failed and we were unable to recover it.
00:27:34.091 [2024-07-25 19:18:26.447773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.091 [2024-07-25 19:18:26.447803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.091 qpair failed and we were unable to recover it.
00:27:34.091 [2024-07-25 19:18:26.447974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.091 [2024-07-25 19:18:26.448003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.091 qpair failed and we were unable to recover it.
00:27:34.091 [2024-07-25 19:18:26.448228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.448256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.448405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.448433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.448573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.448616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.448832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.448859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.449029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.449060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 
00:27:34.091 [2024-07-25 19:18:26.449260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.449290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.449503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.449530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.449685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.449712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.449905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.449935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.450160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.450187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 
00:27:34.091 [2024-07-25 19:18:26.450339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.450365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.450507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.450534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.450704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.450731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.450948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.450978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.451167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.451197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 
00:27:34.091 [2024-07-25 19:18:26.451369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.451397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.451612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.451642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.451832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.451862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.452030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.452065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.452226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.452253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 
00:27:34.091 [2024-07-25 19:18:26.452419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.452446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.452613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.452644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.452816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.452846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.453060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.453089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.453281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.453308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 
00:27:34.091 [2024-07-25 19:18:26.453504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.453533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.453722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.453752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.453966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.453994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.454214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.454244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.454431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.454460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 
00:27:34.091 [2024-07-25 19:18:26.454629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.454656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.454854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.454884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.455069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.091 [2024-07-25 19:18:26.455099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.091 qpair failed and we were unable to recover it. 00:27:34.091 [2024-07-25 19:18:26.455276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.455302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.455465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.455492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 
00:27:34.092 [2024-07-25 19:18:26.455684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.455714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.455911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.455938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.456091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.456135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.456364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.456394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.456582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.456608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 
00:27:34.092 [2024-07-25 19:18:26.456782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.456809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.457029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.457059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.457293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.457320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.457520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.457550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.457739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.457769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 
00:27:34.092 [2024-07-25 19:18:26.457986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.458013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.458191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.458221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.458413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.458442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.458659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.458690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.458882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.458912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 
00:27:34.092 [2024-07-25 19:18:26.459109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.459140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.459337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.459363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.459560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.459589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.459804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.459833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.460031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.460057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 
00:27:34.092 [2024-07-25 19:18:26.460250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.460277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.460479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.460509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.460701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.460728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.460896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.460925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.461118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.461162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 
00:27:34.092 [2024-07-25 19:18:26.461361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.461387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.461582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.461612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.461842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.461869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.462070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.462096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.462291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.462320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 
00:27:34.092 [2024-07-25 19:18:26.462542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.462569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.462734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.462766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.462936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.462966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.463171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.092 [2024-07-25 19:18:26.463201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.092 qpair failed and we were unable to recover it. 00:27:34.092 [2024-07-25 19:18:26.463367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.093 [2024-07-25 19:18:26.463394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.093 qpair failed and we were unable to recover it. 
00:27:34.093 [2024-07-25 19:18:26.463586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.093 [2024-07-25 19:18:26.463615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.093 qpair failed and we were unable to recover it. 00:27:34.093 [2024-07-25 19:18:26.463834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.093 [2024-07-25 19:18:26.463864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.093 qpair failed and we were unable to recover it. 00:27:34.093 [2024-07-25 19:18:26.464055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.093 [2024-07-25 19:18:26.464082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.093 qpair failed and we were unable to recover it. 00:27:34.093 [2024-07-25 19:18:26.464337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.093 [2024-07-25 19:18:26.464368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.093 qpair failed and we were unable to recover it. 00:27:34.093 [2024-07-25 19:18:26.464583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.093 [2024-07-25 19:18:26.464613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.093 qpair failed and we were unable to recover it. 
00:27:34.093 [2024-07-25 19:18:26.464827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.464858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.465056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.465087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.465287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.465318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.465520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.465546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.465708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.465737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.465941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.465968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.466143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.466170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.466369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.466409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.466574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.466602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.466799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.466827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.466980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.467007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.467184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.467212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.467385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.467412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.467609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.467638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.467849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.467876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.468073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.468109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.468294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.468321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.468526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.468555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.468766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.468793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.469008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.469038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.469254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.469281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.469485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.469512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.093 [2024-07-25 19:18:26.469728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.093 [2024-07-25 19:18:26.469758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.093 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.469927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.469954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.470154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.470180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.470412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.470440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.470609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.470637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.470848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.470873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.471072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.471110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.471296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.471323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.471498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.471532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.471705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.471735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.471930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.471957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.472123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.472157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.472373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.472403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.472584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.472614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.472832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.472859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.473074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.473111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.473331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.473358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.473528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.473566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.473762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.473792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.473984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.474018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.474219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.474246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.474422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.474449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.474648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.474678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.474854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.474880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.475089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.475125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.475267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.475294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.475445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.475484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.475685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.475715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.475875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.475905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.476079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.476116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.476309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.476336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.476512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.476539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.476705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.476732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.476933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.476964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.477157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.477188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.477382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.477409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.477586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.094 [2024-07-25 19:18:26.477613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.094 qpair failed and we were unable to recover it.
00:27:34.094 [2024-07-25 19:18:26.477774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.477801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.477970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.477997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.478213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.478244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.478465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.478495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.478676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.478703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.478877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.478909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.479123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.479154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.479347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.479374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.479572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.479601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.479824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.479856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.480029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.480056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.480253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.480284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.480478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.480508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.480709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.480736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.480929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.480959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.481117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.481158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.481323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.481350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.481536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.481567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.481791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.481818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.481992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.482021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.482227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.482254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.482410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.482437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.482591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.482619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.482823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.482852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.483024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.483053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.483240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.483267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.483485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.483512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.483705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.483731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.483898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.483922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.484144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.484172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.484341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.484368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.484561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.484585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.484815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.484839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.485013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.095 [2024-07-25 19:18:26.485037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.095 qpair failed and we were unable to recover it.
00:27:34.095 [2024-07-25 19:18:26.485210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.485235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.485469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.485497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.485676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.485704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.485875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.485900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.486050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.486076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.486276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.486305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.486501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.486527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.486751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.486780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.486962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.486990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.487164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.487190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.487391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.487419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.487581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.487609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.487802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.487828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.488026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.488054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.488255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.488281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.488435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.488461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.488670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.488712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.488882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.488910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.489100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.489134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.489285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.489310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.489486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.489514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.489688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.096 [2024-07-25 19:18:26.489717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.096 qpair failed and we were unable to recover it.
00:27:34.096 [2024-07-25 19:18:26.489911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.096 [2024-07-25 19:18:26.489939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.096 qpair failed and we were unable to recover it. 00:27:34.096 [2024-07-25 19:18:26.490128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.096 [2024-07-25 19:18:26.490160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.096 qpair failed and we were unable to recover it. 00:27:34.096 [2024-07-25 19:18:26.490332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.096 [2024-07-25 19:18:26.490357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.096 qpair failed and we were unable to recover it. 00:27:34.096 [2024-07-25 19:18:26.490554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.096 [2024-07-25 19:18:26.490581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.096 qpair failed and we were unable to recover it. 00:27:34.096 [2024-07-25 19:18:26.490786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.096 [2024-07-25 19:18:26.490816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.096 qpair failed and we were unable to recover it. 
00:27:34.096 [2024-07-25 19:18:26.491036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.096 [2024-07-25 19:18:26.491062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.096 qpair failed and we were unable to recover it. 00:27:34.096 [2024-07-25 19:18:26.491222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.096 [2024-07-25 19:18:26.491250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.096 qpair failed and we were unable to recover it. 00:27:34.096 [2024-07-25 19:18:26.491457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.096 [2024-07-25 19:18:26.491486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.096 qpair failed and we were unable to recover it. 00:27:34.096 [2024-07-25 19:18:26.491685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.096 [2024-07-25 19:18:26.491712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.096 qpair failed and we were unable to recover it. 00:27:34.096 [2024-07-25 19:18:26.491896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.096 [2024-07-25 19:18:26.491925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.096 qpair failed and we were unable to recover it. 
00:27:34.096 [2024-07-25 19:18:26.492098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.096 [2024-07-25 19:18:26.492140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.096 qpair failed and we were unable to recover it. 00:27:34.096 [2024-07-25 19:18:26.492309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.096 [2024-07-25 19:18:26.492346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.096 qpair failed and we were unable to recover it. 00:27:34.096 [2024-07-25 19:18:26.492510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.096 [2024-07-25 19:18:26.492539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.096 qpair failed and we were unable to recover it. 00:27:34.096 [2024-07-25 19:18:26.492700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.096 [2024-07-25 19:18:26.492730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.096 qpair failed and we were unable to recover it. 00:27:34.096 [2024-07-25 19:18:26.492949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.492975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 
00:27:34.097 [2024-07-25 19:18:26.493153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.493183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.493368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.493397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.493568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.493595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.493781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.493810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.493969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.493999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 
00:27:34.097 [2024-07-25 19:18:26.494167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.494194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.494390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.494429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.494628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.494659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.494885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.494913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.495116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.495161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 
00:27:34.097 [2024-07-25 19:18:26.495338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.495365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.495537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.495565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.495834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.495886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.496084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.496124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.496317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.496343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 
00:27:34.097 [2024-07-25 19:18:26.496501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.496530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.496725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.496755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.497043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.497070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.497302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.497330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.497481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.497513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 
00:27:34.097 [2024-07-25 19:18:26.497662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.497689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.497860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.497948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.498151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.498179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.498327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.498355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.498498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.498525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 
00:27:34.097 [2024-07-25 19:18:26.498681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.498707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.498880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.498907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.499109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.499136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.499312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.499339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.499533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.499560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 
00:27:34.097 [2024-07-25 19:18:26.499835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.499892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.500086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.500122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.500309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.500336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.097 [2024-07-25 19:18:26.500504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.097 [2024-07-25 19:18:26.500535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.097 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.500718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.500747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 
00:27:34.098 [2024-07-25 19:18:26.500907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.500933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.501111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.501138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.501291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.501317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.501488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.501514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.501850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.501909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 
00:27:34.098 [2024-07-25 19:18:26.502188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.502215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.502370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.502398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.502590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.502619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.502834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.502863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.503087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.503118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 
00:27:34.098 [2024-07-25 19:18:26.503295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.503321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.503523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.503554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.503750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.503777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.503946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.503976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.504179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.504206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 
00:27:34.098 [2024-07-25 19:18:26.504352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.504378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.504532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.504558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.504774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.504803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.504979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.505008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.505211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.505238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 
00:27:34.098 [2024-07-25 19:18:26.505435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.505461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.505611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.505638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.505830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.505861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.506054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.506083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.506280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.506312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 
00:27:34.098 [2024-07-25 19:18:26.506577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.506608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.098 [2024-07-25 19:18:26.506794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.098 [2024-07-25 19:18:26.506825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.098 qpair failed and we were unable to recover it. 00:27:34.099 [2024-07-25 19:18:26.507081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.099 [2024-07-25 19:18:26.507113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.099 qpair failed and we were unable to recover it. 00:27:34.099 [2024-07-25 19:18:26.507290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.099 [2024-07-25 19:18:26.507316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.099 qpair failed and we were unable to recover it. 00:27:34.099 [2024-07-25 19:18:26.507488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.099 [2024-07-25 19:18:26.507515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.099 qpair failed and we were unable to recover it. 
00:27:34.378 [2024-07-25 19:18:26.530479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.378 [2024-07-25 19:18:26.530520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.378 qpair failed and we were unable to recover it. 00:27:34.378 [2024-07-25 19:18:26.530723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.378 [2024-07-25 19:18:26.530769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.378 qpair failed and we were unable to recover it. 00:27:34.378 [2024-07-25 19:18:26.530994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.378 [2024-07-25 19:18:26.531038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.378 qpair failed and we were unable to recover it. 00:27:34.378 [2024-07-25 19:18:26.531213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.378 [2024-07-25 19:18:26.531241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.378 qpair failed and we were unable to recover it. 00:27:34.378 [2024-07-25 19:18:26.531384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.378 [2024-07-25 19:18:26.531411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.378 qpair failed and we were unable to recover it. 
00:27:34.378 [2024-07-25 19:18:26.531602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.378 [2024-07-25 19:18:26.531631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.378 qpair failed and we were unable to recover it. 00:27:34.378 [2024-07-25 19:18:26.531796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.378 [2024-07-25 19:18:26.531841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.378 qpair failed and we were unable to recover it. 00:27:34.378 [2024-07-25 19:18:26.531990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.532018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.532210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.532257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.532492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.532543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 
00:27:34.379 [2024-07-25 19:18:26.532749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.532795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.532947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.532974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.533128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.533156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.533329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.533376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.533589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.533633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 
00:27:34.379 [2024-07-25 19:18:26.533859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.533904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.534088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.534125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.534318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.534363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.534607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.534656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.534830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.534875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 
00:27:34.379 [2024-07-25 19:18:26.535051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.535078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.535297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.535342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.535506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.535536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.535732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.535761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.535926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.535956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 
00:27:34.379 [2024-07-25 19:18:26.536127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.536155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.536323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.536355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.536769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.536823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.537034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.537064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.537257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.537284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 
00:27:34.379 [2024-07-25 19:18:26.537483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.537512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.537705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.537735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.537908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.537937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.538132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.538163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.538333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.538380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 
00:27:34.379 [2024-07-25 19:18:26.538609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.538652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.539023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.539094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.539311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.539357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.539562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.539605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.539829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.539857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 
00:27:34.379 [2024-07-25 19:18:26.540036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.540069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.540261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.540309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.540588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.540644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.379 [2024-07-25 19:18:26.540798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.379 [2024-07-25 19:18:26.540826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.379 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.541021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.541049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 
00:27:34.380 [2024-07-25 19:18:26.541278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.541324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.541561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.541605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.541785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.541837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.541990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.542019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.542226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.542271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 
00:27:34.380 [2024-07-25 19:18:26.542476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.542522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.542763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.542812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.543006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.543033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.543208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.543253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.543444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.543489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 
00:27:34.380 [2024-07-25 19:18:26.543665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.543712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.543869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.543896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.544046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.544073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.544281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.544327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.544525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.544571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 
00:27:34.380 [2024-07-25 19:18:26.544728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.544773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.544945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.544973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.545170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.545201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.545395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.545440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.545610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.545654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 
00:27:34.380 [2024-07-25 19:18:26.545861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.545906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.546055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.546082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.546274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.546322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.546498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.546544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.546774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.546823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 
00:27:34.380 [2024-07-25 19:18:26.546971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.546999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.547195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.547240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.547440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.547484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.547683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.547713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.547895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.547922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 
00:27:34.380 [2024-07-25 19:18:26.548096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.548137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.548367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.548417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.548593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.548638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.548814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.548859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 00:27:34.380 [2024-07-25 19:18:26.549011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.549038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.380 qpair failed and we were unable to recover it. 
00:27:34.380 [2024-07-25 19:18:26.549213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.380 [2024-07-25 19:18:26.549258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.381 qpair failed and we were unable to recover it. 00:27:34.381 [2024-07-25 19:18:26.549458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.381 [2024-07-25 19:18:26.549503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.381 qpair failed and we were unable to recover it. 00:27:34.381 [2024-07-25 19:18:26.549727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.381 [2024-07-25 19:18:26.549771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.381 qpair failed and we were unable to recover it. 00:27:34.381 [2024-07-25 19:18:26.549944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.381 [2024-07-25 19:18:26.549971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.381 qpair failed and we were unable to recover it. 00:27:34.381 [2024-07-25 19:18:26.550121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.381 [2024-07-25 19:18:26.550149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.381 qpair failed and we were unable to recover it. 
00:27:34.385 [2024-07-25 19:18:26.575668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.575712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.575886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.575921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.576075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.576113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.576302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.576348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.576547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.576592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 
00:27:34.385 [2024-07-25 19:18:26.576760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.576804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.576956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.576984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.577168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.577214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.577415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.577458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.577687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.577731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 
00:27:34.385 [2024-07-25 19:18:26.577906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.577933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.578110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.578139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.578323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.578368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.578555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.578583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.578815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.578859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 
00:27:34.385 [2024-07-25 19:18:26.579037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.579064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.579247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.579291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.579493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.579538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.579765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.579808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.580011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.580039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 
00:27:34.385 [2024-07-25 19:18:26.580219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.580266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.580440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.580486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.580687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.580731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.580895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.580922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.581129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.581157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 
00:27:34.385 [2024-07-25 19:18:26.581360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.581405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.581605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.581649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.581856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.581884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.582090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.582125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.582300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.582328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 
00:27:34.385 [2024-07-25 19:18:26.582498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.582542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.385 [2024-07-25 19:18:26.582737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.385 [2024-07-25 19:18:26.582782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.385 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.582957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.582984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.583131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.583160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.583354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.583399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 
00:27:34.386 [2024-07-25 19:18:26.583595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.583639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.583828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.583872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.584044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.584072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.584283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.584328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.584554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.584599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 
00:27:34.386 [2024-07-25 19:18:26.584828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.584873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.585029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.585059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.585275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.585321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.585512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.585556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.585768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.585812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 
00:27:34.386 [2024-07-25 19:18:26.585984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.586012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.586209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.586263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.586466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.586512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.586690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.586735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.586934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.586961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 
00:27:34.386 [2024-07-25 19:18:26.587175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.587206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.587419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.587463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.587638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.587683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.587854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.587882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.588024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.588052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 
00:27:34.386 [2024-07-25 19:18:26.588291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.588337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.588518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.588563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.588761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.588807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.588981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.589009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.589182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.589227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 
00:27:34.386 [2024-07-25 19:18:26.589432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.589476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.589702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.589746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.589913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.589940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.590117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.590146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.386 qpair failed and we were unable to recover it. 00:27:34.386 [2024-07-25 19:18:26.590365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.386 [2024-07-25 19:18:26.590410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.387 qpair failed and we were unable to recover it. 
00:27:34.387 [2024-07-25 19:18:26.590611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.387 [2024-07-25 19:18:26.590654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.387 qpair failed and we were unable to recover it. 00:27:34.387 [2024-07-25 19:18:26.590837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.387 [2024-07-25 19:18:26.590866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.387 qpair failed and we were unable to recover it. 00:27:34.387 [2024-07-25 19:18:26.591044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.387 [2024-07-25 19:18:26.591077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.387 qpair failed and we were unable to recover it. 00:27:34.387 [2024-07-25 19:18:26.591277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.387 [2024-07-25 19:18:26.591322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.387 qpair failed and we were unable to recover it. 00:27:34.387 [2024-07-25 19:18:26.591486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.387 [2024-07-25 19:18:26.591531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.387 qpair failed and we were unable to recover it. 
00:27:34.387 [2024-07-25 19:18:26.591727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.387 [2024-07-25 19:18:26.591771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.387 qpair failed and we were unable to recover it. 00:27:34.387 [2024-07-25 19:18:26.591912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.387 [2024-07-25 19:18:26.591939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.387 qpair failed and we were unable to recover it. 00:27:34.387 [2024-07-25 19:18:26.592114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.387 [2024-07-25 19:18:26.592146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.387 qpair failed and we were unable to recover it. 00:27:34.387 [2024-07-25 19:18:26.592334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.387 [2024-07-25 19:18:26.592379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.387 qpair failed and we were unable to recover it. 00:27:34.387 [2024-07-25 19:18:26.592567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.387 [2024-07-25 19:18:26.592611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.387 qpair failed and we were unable to recover it. 
00:27:34.387 [2024-07-25 19:18:26.592847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.387 [2024-07-25 19:18:26.592891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:34.387 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error and "qpair failed and we were unable to recover it." message repeats for tqpair=0x7f63d4000b90 (addr=10.0.0.2, port=4420) through timestamp 19:18:26.620245; repeated entries omitted ...]
00:27:34.390 [2024-07-25 19:18:26.620436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.390 [2024-07-25 19:18:26.620480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.390 qpair failed and we were unable to recover it. 00:27:34.390 [2024-07-25 19:18:26.620702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.390 [2024-07-25 19:18:26.620747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.390 qpair failed and we were unable to recover it. 00:27:34.390 [2024-07-25 19:18:26.620920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.390 [2024-07-25 19:18:26.620957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.390 qpair failed and we were unable to recover it. 00:27:34.390 [2024-07-25 19:18:26.621129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.390 [2024-07-25 19:18:26.621158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.390 qpair failed and we were unable to recover it. 00:27:34.390 [2024-07-25 19:18:26.621335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.390 [2024-07-25 19:18:26.621380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.390 qpair failed and we were unable to recover it. 
00:27:34.390 [2024-07-25 19:18:26.621602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.390 [2024-07-25 19:18:26.621647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.390 qpair failed and we were unable to recover it. 00:27:34.390 [2024-07-25 19:18:26.621819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.390 [2024-07-25 19:18:26.621847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.390 qpair failed and we were unable to recover it. 00:27:34.390 [2024-07-25 19:18:26.622021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.390 [2024-07-25 19:18:26.622048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.390 qpair failed and we were unable to recover it. 00:27:34.390 [2024-07-25 19:18:26.622231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.622276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.622479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.622524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 
00:27:34.391 [2024-07-25 19:18:26.622732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.622776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.622951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.622986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.623208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.623252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.623453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.623498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.623697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.623742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 
00:27:34.391 [2024-07-25 19:18:26.623938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.623965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.624138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.624167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.624363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.624412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.624590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.624634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.624806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.624834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 
00:27:34.391 [2024-07-25 19:18:26.625002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.625030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.625200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.625246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.625446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.625491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.625687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.625717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.625901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.625929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 
00:27:34.391 [2024-07-25 19:18:26.626071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.626098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.626291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.626337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.626539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.626583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.626774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.626819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.626995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.627022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 
00:27:34.391 [2024-07-25 19:18:26.627217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.627261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.627468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.627496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.627672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.627716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.627914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.627942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.628092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.628135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 
00:27:34.391 [2024-07-25 19:18:26.628319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.628363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.628551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.628596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.628777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.628822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.629023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.629052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.629276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.629321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 
00:27:34.391 [2024-07-25 19:18:26.629488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.629531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.629733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.629777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.629950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.629989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.391 [2024-07-25 19:18:26.630208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.391 [2024-07-25 19:18:26.630253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.391 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.630423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.630469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 
00:27:34.392 [2024-07-25 19:18:26.630671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.630716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.630892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.630920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.631090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.631125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.631302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.631348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.631577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.631621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 
00:27:34.392 [2024-07-25 19:18:26.631829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.631873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.632022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.632050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.632274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.632320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.632519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.632564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.632765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.632809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 
00:27:34.392 [2024-07-25 19:18:26.633007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.633034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.633231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.633276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.633538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.633583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.633785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.633829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.634007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.634034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 
00:27:34.392 [2024-07-25 19:18:26.634204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.634249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.634486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.634530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.634755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.634801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.634996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.635030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.635250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.635294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 
00:27:34.392 [2024-07-25 19:18:26.635569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.635613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.635839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.635884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.636057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.636085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.636365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.636410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.636637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.636680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 
00:27:34.392 [2024-07-25 19:18:26.636853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.636897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.637072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.637100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.637348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.637393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.637627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.392 [2024-07-25 19:18:26.637670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.392 qpair failed and we were unable to recover it. 00:27:34.392 [2024-07-25 19:18:26.637892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.393 [2024-07-25 19:18:26.637936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.393 qpair failed and we were unable to recover it. 
00:27:34.393 [2024-07-25 19:18:26.638118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.393 [2024-07-25 19:18:26.638147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:34.393 qpair failed and we were unable to recover it.
00:27:34.393 [repeated: the identical three-record sequence above — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — recurs for every subsequent connection attempt, with only timestamps varying, from 19:18:26.638378 through 19:18:26.665469]
00:27:34.396 [2024-07-25 19:18:26.665659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.665702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.665885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.665913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.666112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.666140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.666332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.666380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.666549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.666598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 
00:27:34.396 [2024-07-25 19:18:26.666791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.666836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.667034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.667061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.667267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.667295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.667466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.667516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.667711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.667756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 
00:27:34.396 [2024-07-25 19:18:26.667933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.667961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.668146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.668197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.668394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.668437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.668636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.668679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.668825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.668853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 
00:27:34.396 [2024-07-25 19:18:26.669013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.669041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.669206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.669252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.669483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.669528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.669699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.669743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.669917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.669951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 
00:27:34.396 [2024-07-25 19:18:26.670128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.670158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.670351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.670397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.670602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.396 [2024-07-25 19:18:26.670646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.396 qpair failed and we were unable to recover it. 00:27:34.396 [2024-07-25 19:18:26.670839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.670866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.671019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.671061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 
00:27:34.397 [2024-07-25 19:18:26.671288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.671334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.671594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.671642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.671858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.671900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.672073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.672128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.672366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.672412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 
00:27:34.397 [2024-07-25 19:18:26.672606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.672636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.672829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.672874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.673052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.673079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.673346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.673391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.673570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.673597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 
00:27:34.397 [2024-07-25 19:18:26.673821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.673865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.674072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.674099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.674276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.674321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.674536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.674580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.674868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.674912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 
00:27:34.397 [2024-07-25 19:18:26.675108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.675135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.675373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.675418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.675655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.675700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.675888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.675915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.676091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.676134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 
00:27:34.397 [2024-07-25 19:18:26.676341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.676385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.676609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.676654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.676901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.676929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.677131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.677163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-07-25 19:18:26.677421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.397 [2024-07-25 19:18:26.677466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.397 qpair failed and we were unable to recover it. 
00:27:34.397 [2024-07-25 19:18:26.677782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.677812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.678028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.678070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.678286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.678331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.678528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.678573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.678762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.678792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 
00:27:34.398 [2024-07-25 19:18:26.679084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.679118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.679320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.679364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.679543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.679587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.679769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.679814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.680010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.680052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 
00:27:34.398 [2024-07-25 19:18:26.680283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.680329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.680574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.680618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.680909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.680953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.681128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.681156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.681382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.681412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 
00:27:34.398 [2024-07-25 19:18:26.681604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.681647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.681887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.681932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.682126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.682154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.682359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.682404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.682618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.682663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 
00:27:34.398 [2024-07-25 19:18:26.682872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.682915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.683112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.683154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.683353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.683398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.683626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.683671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.683888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.683931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 
00:27:34.398 [2024-07-25 19:18:26.684129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.684158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.684383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.684427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.684599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.684643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.684867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.684911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.685070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.685096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 
00:27:34.398 [2024-07-25 19:18:26.685403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.685448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.685696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.685741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.685932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.685977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 [2024-07-25 19:18:26.686183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.398 [2024-07-25 19:18:26.686214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.686448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.686492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 
00:27:34.399 [2024-07-25 19:18:26.686735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.686779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.686986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.687013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.687232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.687277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.687445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.687493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.687690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.687735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 
00:27:34.399 [2024-07-25 19:18:26.687927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.687954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.688108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.688140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.688364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.688407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.688717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.688762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.688946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.688973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 
00:27:34.399 [2024-07-25 19:18:26.689203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.689249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.689427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.689474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.689672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.689718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.689896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.689924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.690089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.690126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 
00:27:34.399 [2024-07-25 19:18:26.690337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.690381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.690576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.690621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.690892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.690938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.691115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.691144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.691340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.691384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 
00:27:34.399 [2024-07-25 19:18:26.691591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.691635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.691835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.691865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.692057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.692084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.692286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.692332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.692525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.692570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 
00:27:34.399 [2024-07-25 19:18:26.692793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.692837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.693037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.693065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.693263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.693309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.693510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.693555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.693781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.693825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 
00:27:34.399 [2024-07-25 19:18:26.693978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.694006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.694224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.694271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.694489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.694535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.399 qpair failed and we were unable to recover it. 00:27:34.399 [2024-07-25 19:18:26.694749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.399 [2024-07-25 19:18:26.694792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.694975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.695003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 
00:27:34.400 [2024-07-25 19:18:26.695190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.695235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.695458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.695503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.695699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.695742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.695941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.695968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.696158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.696190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 
00:27:34.400 [2024-07-25 19:18:26.696475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.696519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.696717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.696762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.696940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.696967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.697194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.697243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.697444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.697488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 
00:27:34.400 [2024-07-25 19:18:26.697712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.697757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.697958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.697986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.698171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.698217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.698424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.698469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.698690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.698733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 
00:27:34.400 [2024-07-25 19:18:26.698934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.698961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.699134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.699172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.699378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.699423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.699646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.699689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.699889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.699916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 
00:27:34.400 [2024-07-25 19:18:26.700085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.700126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.700336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.700381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.700584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.700629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.700853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.700898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.701074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.701113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 
00:27:34.400 [2024-07-25 19:18:26.701310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.701355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.701543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.701571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.701794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.701839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.702045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.702073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.400 qpair failed and we were unable to recover it. 00:27:34.400 [2024-07-25 19:18:26.702310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.400 [2024-07-25 19:18:26.702355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 
00:27:34.401 [2024-07-25 19:18:26.702522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.702567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.702759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.702804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.702973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.703000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.703163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.703210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.703413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.703458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 
00:27:34.401 [2024-07-25 19:18:26.703680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.703725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.704077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.704151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.704355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.704387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.704553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.704584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.704775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.704807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 
00:27:34.401 [2024-07-25 19:18:26.705011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.705039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.705229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.705256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.705427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.705457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.705682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.705711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.706023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.706075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 
00:27:34.401 [2024-07-25 19:18:26.706303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.706331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.706559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.706589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.706951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.707008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.707219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.707245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.707444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.707475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 
00:27:34.401 [2024-07-25 19:18:26.707674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.707704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.707906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.707949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.708120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.708147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.708343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.708370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.708594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.708623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 
00:27:34.401 [2024-07-25 19:18:26.708843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.708873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.709093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.709127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.709308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.709335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.709510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.709538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 00:27:34.401 [2024-07-25 19:18:26.709735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.401 [2024-07-25 19:18:26.709765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.401 qpair failed and we were unable to recover it. 
00:27:34.401 [2024-07-25 19:18:26.709956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.401 [2024-07-25 19:18:26.709987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.401 qpair failed and we were unable to recover it.
00:27:34.401 [2024-07-25 19:18:26.710173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.401 [2024-07-25 19:18:26.710201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.401 qpair failed and we were unable to recover it.
00:27:34.401 [2024-07-25 19:18:26.710383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.401 [2024-07-25 19:18:26.710410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.401 qpair failed and we were unable to recover it.
00:27:34.401 [2024-07-25 19:18:26.710603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.401 [2024-07-25 19:18:26.710632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.401 qpair failed and we were unable to recover it.
00:27:34.401 [2024-07-25 19:18:26.710845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.401 [2024-07-25 19:18:26.710875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.401 qpair failed and we were unable to recover it.
00:27:34.401 [2024-07-25 19:18:26.711150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.711177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.711349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.711376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.711544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.711574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.711788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.711818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.712031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.712061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.712278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.712305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.712476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.712503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.712665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.712694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.712888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.712917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.713083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.713115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.713277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.713302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.713484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.713516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.713852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.713908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.714099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.714133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.714322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.714350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.714546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.714573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.714784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.714814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.715027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.715057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.715258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.715285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.715478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.715508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.715698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.715729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.716095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.716158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.716372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.716399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.716623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.716653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.716942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.717005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.717201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.717228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.717379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.717423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.717619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.717646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.717831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.717860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.402 qpair failed and we were unable to recover it.
00:27:34.402 [2024-07-25 19:18:26.718077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.402 [2024-07-25 19:18:26.718113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.718284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.718311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.718500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.718529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.718716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.718745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.719163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.719190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.719362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.719406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.719624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.719655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.719894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.719945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.720173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.720200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.720381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.720425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.720643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.720670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.720839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.720877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.721092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.721128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.721318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.721345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.721539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.721569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.721756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.721786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.722036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.722066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.722274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.722301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.722491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.722521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.722786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.722837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.723037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.723064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.723273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.723301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.723471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.723520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.723712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.723741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.723930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.723960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.724155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.724183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.724380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.724425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.724649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.724679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.724924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.724954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.725180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.725208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.725383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.725427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.725617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.725643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.725817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.725844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.726011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.726038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.726212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.726240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.726410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.726440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.403 [2024-07-25 19:18:26.726611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.403 [2024-07-25 19:18:26.726641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.403 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.726856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.726883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.727067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.727097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.727323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.727350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.727546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.727573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.727785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.727812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.727985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.728012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.728169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.728197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.728393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.728420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.728596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.728626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.728843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.728870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.729077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.729127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.729292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.729321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.729542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.729569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.729766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.729796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.729979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.730008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.730232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.730259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.730431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.730461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.730638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.730664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.730838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.730865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.731058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.731087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.731282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.731312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.731509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.731537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.731726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.731755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.731958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.404 [2024-07-25 19:18:26.731988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:34.404 qpair failed and we were unable to recover it.
00:27:34.404 [2024-07-25 19:18:26.732171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.404 [2024-07-25 19:18:26.732200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.404 qpair failed and we were unable to recover it. 00:27:34.404 [2024-07-25 19:18:26.732356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.404 [2024-07-25 19:18:26.732382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.404 qpair failed and we were unable to recover it. 00:27:34.404 [2024-07-25 19:18:26.732645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.404 [2024-07-25 19:18:26.732690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.404 qpair failed and we were unable to recover it. 00:27:34.404 [2024-07-25 19:18:26.732905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.404 [2024-07-25 19:18:26.732935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.404 qpair failed and we were unable to recover it. 00:27:34.404 [2024-07-25 19:18:26.733131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.404 [2024-07-25 19:18:26.733173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.404 qpair failed and we were unable to recover it. 
00:27:34.404 [2024-07-25 19:18:26.733342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.404 [2024-07-25 19:18:26.733371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.404 qpair failed and we were unable to recover it. 00:27:34.404 [2024-07-25 19:18:26.733537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.404 [2024-07-25 19:18:26.733563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.404 qpair failed and we were unable to recover it. 00:27:34.404 [2024-07-25 19:18:26.733756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.404 [2024-07-25 19:18:26.733786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.404 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.734003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.734031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.734232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.734261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 
00:27:34.405 [2024-07-25 19:18:26.734449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.734479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.734731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.734760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.734958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.734984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.735154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.735182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.735344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.735371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 
00:27:34.405 [2024-07-25 19:18:26.735510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.735542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.735719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.735746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.735950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.735979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.736176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.736203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.736356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.736382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 
00:27:34.405 [2024-07-25 19:18:26.736567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.736596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.736793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.736820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.736983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.737012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.737231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.737275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.737448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.737476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 
00:27:34.405 [2024-07-25 19:18:26.737656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.737683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.737828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.737854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.738027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.738054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.738292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.738322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.738710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.738769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 
00:27:34.405 [2024-07-25 19:18:26.738985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.739012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.739210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.739239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.739486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.739538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.739732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.739759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.739910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.739936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 
00:27:34.405 [2024-07-25 19:18:26.740149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.740176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.740323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.740359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.740538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.740564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.740819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.740876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.741092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.741128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 
00:27:34.405 [2024-07-25 19:18:26.741304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.741333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.741523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.741551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.741754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.741786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.405 [2024-07-25 19:18:26.741978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.405 [2024-07-25 19:18:26.742007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.405 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.742202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.742230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 
00:27:34.406 [2024-07-25 19:18:26.742419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.742445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.742663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.742691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.742962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.742991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.743187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.743213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.743360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.743403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 
00:27:34.406 [2024-07-25 19:18:26.743742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.743803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.744027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.744053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.744265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.744292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.744471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.744497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.744648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.744675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 
00:27:34.406 [2024-07-25 19:18:26.744895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.744924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.745143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.745173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.745393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.745419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.745612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.745641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.745956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.746020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 
00:27:34.406 [2024-07-25 19:18:26.746194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.746221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.746409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.746438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.746636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.746663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.746833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.746860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.747039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.747066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 
00:27:34.406 [2024-07-25 19:18:26.747265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.747296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.747480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.747507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.747699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.747729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.747890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.747920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.748118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.748152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 
00:27:34.406 [2024-07-25 19:18:26.748331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.748363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.748557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.748587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.748781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.748808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.748992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.749022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.749272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.749313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 
00:27:34.406 [2024-07-25 19:18:26.749492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.749522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.749723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.749754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.406 [2024-07-25 19:18:26.749911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.406 [2024-07-25 19:18:26.749941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.406 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.750198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.750226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.750395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.750425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 
00:27:34.407 [2024-07-25 19:18:26.750758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.750808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.751007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.751034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.751211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.751239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.751446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.751477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.751672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.751705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 
00:27:34.407 [2024-07-25 19:18:26.751853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.751880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.752067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.752096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.752296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.752323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.752502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.752529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.752678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.752706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 
00:27:34.407 [2024-07-25 19:18:26.752882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.752908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.753182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.753210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.753427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.753456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.753649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.753676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.753898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.753927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 
00:27:34.407 [2024-07-25 19:18:26.754130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.754175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.754369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.754400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.754662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.754691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.755049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.755114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.755285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.755313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 
00:27:34.407 [2024-07-25 19:18:26.755531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.755561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.755803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.755832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.756025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.756051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.756255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.756281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.756480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.756510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 
00:27:34.407 [2024-07-25 19:18:26.756680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.756706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.756877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.407 [2024-07-25 19:18:26.756903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.407 qpair failed and we were unable to recover it. 00:27:34.407 [2024-07-25 19:18:26.757126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.757169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.757318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.757345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.757533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.757562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 
00:27:34.408 [2024-07-25 19:18:26.757842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.757894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.758112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.758139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.758344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.758373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.758560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.758588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.758800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.758827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 
00:27:34.408 [2024-07-25 19:18:26.759043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.759072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.759239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.759268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.759455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.759482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.759699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.759728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.759958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.760005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 
00:27:34.408 [2024-07-25 19:18:26.760200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.760227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.760383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.760410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.760579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.760608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.760793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.760819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.761004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.761033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 
00:27:34.408 [2024-07-25 19:18:26.761213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.761240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.761379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.761406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.761583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.761609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.761808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.761837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.762015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.762043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 
00:27:34.408 [2024-07-25 19:18:26.762225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.762255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.762439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.762466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.762638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.762665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.762859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.762889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.763056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.763084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 
00:27:34.408 [2024-07-25 19:18:26.763279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.763305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.763463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.763494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.763672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.763698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.763871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.763897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.764107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.764136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 
00:27:34.408 [2024-07-25 19:18:26.764325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.764351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.764519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.408 [2024-07-25 19:18:26.764546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.408 qpair failed and we were unable to recover it. 00:27:34.408 [2024-07-25 19:18:26.764738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.764767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.764956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.764985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.765175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.765202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 
00:27:34.409 [2024-07-25 19:18:26.765406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.765434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.765595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.765624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.765841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.765867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.766020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.766046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.766242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.766272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 
00:27:34.409 [2024-07-25 19:18:26.766472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.766499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.766693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.766723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.766917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.766946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.767140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.767167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.767317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.767343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 
00:27:34.409 [2024-07-25 19:18:26.767515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.767541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.767683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.767711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.767902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.767932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.768129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.768156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.768329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.768355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 
00:27:34.409 [2024-07-25 19:18:26.768570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.768599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.768794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.768822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.769003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.769029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.769207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.769237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.769438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.769466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 
00:27:34.409 [2024-07-25 19:18:26.769650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.769676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.769863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.769891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.770120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.770164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.770338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.770364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.770514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.770540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 
00:27:34.409 [2024-07-25 19:18:26.770741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.770769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.770950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.770976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.771193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.771224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.771386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.771416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.771616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.771642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 
00:27:34.409 [2024-07-25 19:18:26.771857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.771886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.772082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.772118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.409 [2024-07-25 19:18:26.772295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.409 [2024-07-25 19:18:26.772321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.409 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.772514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.772542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.772733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.772761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 
00:27:34.410 [2024-07-25 19:18:26.772952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.772978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.773157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.773184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.773330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.773356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.773540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.773565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.773798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.773823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 
00:27:34.410 [2024-07-25 19:18:26.773982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.774008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.774156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.774183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.774354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.774380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.774586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.774615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.774784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.774811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 
00:27:34.410 [2024-07-25 19:18:26.775004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.775032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.775223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.775252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.775422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.775449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.775625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.775651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.775800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.775827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 
00:27:34.410 [2024-07-25 19:18:26.776053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.776082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.776265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.776293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.776437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.776463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.776635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.776661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.776854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.776883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 
00:27:34.410 [2024-07-25 19:18:26.777067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.777097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.777299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.777325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.777471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.777497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.777676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.777703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.777879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.777905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 
00:27:34.410 [2024-07-25 19:18:26.778080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.778112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.778267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.778293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.778434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.778460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.778635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.778661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.778827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.778853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 
00:27:34.410 [2024-07-25 19:18:26.779022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.779048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.779218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.779247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.779413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.779443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.410 [2024-07-25 19:18:26.779616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.410 [2024-07-25 19:18:26.779642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.410 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.779840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.779868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 
00:27:34.411 [2024-07-25 19:18:26.780064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.780093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.780287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.780318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.780468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.780494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.780643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.780669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.780841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.780866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 
00:27:34.411 [2024-07-25 19:18:26.781081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.781121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.781299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.781325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.781473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.781499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.781652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.781679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.781882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.781910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 
00:27:34.411 [2024-07-25 19:18:26.782077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.782112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.782301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.782330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.782542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.782571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.782752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.782778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.782948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.782978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 
00:27:34.411 [2024-07-25 19:18:26.783153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.783183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.783347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.783374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.783565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.783595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.783801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.783827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.784001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.784027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 
00:27:34.411 [2024-07-25 19:18:26.784192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.784219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.784370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.784397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.784577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.784603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.784802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.784831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.785013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.785043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 
00:27:34.411 [2024-07-25 19:18:26.785240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.785267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.785418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.785445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.785592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.785620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.785792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.785819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.785984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.786013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 
00:27:34.411 [2024-07-25 19:18:26.786185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.411 [2024-07-25 19:18:26.786214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.411 qpair failed and we were unable to recover it. 00:27:34.411 [2024-07-25 19:18:26.786431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.786457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.786635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.786664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.786817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.786846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.787031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.787057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 
00:27:34.412 [2024-07-25 19:18:26.787255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.787284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.787473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.787502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.787699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.787726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.787887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.787917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.788112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.788142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 
00:27:34.412 [2024-07-25 19:18:26.788335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.788361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.788557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.788590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.788790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.788819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.789010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.789041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.789221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.789247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 
00:27:34.412 [2024-07-25 19:18:26.789419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.789446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.789601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.789628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.789806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.789835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.790088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.790123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.790340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.790366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 
00:27:34.412 [2024-07-25 19:18:26.790527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.790556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.790714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.790745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.790921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.790947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.791138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.791168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.791354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.791383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 
00:27:34.412 [2024-07-25 19:18:26.791585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.791612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.791800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.791831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.792024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.792053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.792228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.792255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.792456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.792485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 
00:27:34.412 [2024-07-25 19:18:26.792679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.412 [2024-07-25 19:18:26.792705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.412 qpair failed and we were unable to recover it. 00:27:34.412 [2024-07-25 19:18:26.792877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.413 [2024-07-25 19:18:26.792904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.413 qpair failed and we were unable to recover it. 00:27:34.413 [2024-07-25 19:18:26.793097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.413 [2024-07-25 19:18:26.793138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.413 qpair failed and we were unable to recover it. 00:27:34.413 [2024-07-25 19:18:26.793327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.413 [2024-07-25 19:18:26.793357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.413 qpair failed and we were unable to recover it. 00:27:34.413 [2024-07-25 19:18:26.793554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.413 [2024-07-25 19:18:26.793580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.413 qpair failed and we were unable to recover it. 
00:27:34.413 [2024-07-25 19:18:26.793756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.413 [2024-07-25 19:18:26.793786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.413 qpair failed and we were unable to recover it. 00:27:34.413 [2024-07-25 19:18:26.793941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.413 [2024-07-25 19:18:26.793970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.413 qpair failed and we were unable to recover it. 00:27:34.413 [2024-07-25 19:18:26.794137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.413 [2024-07-25 19:18:26.794164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.413 qpair failed and we were unable to recover it. 00:27:34.413 [2024-07-25 19:18:26.794341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.413 [2024-07-25 19:18:26.794368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.413 qpair failed and we were unable to recover it. 00:27:34.413 [2024-07-25 19:18:26.794526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.413 [2024-07-25 19:18:26.794552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.413 qpair failed and we were unable to recover it. 
00:27:34.413 [2024-07-25 19:18:26.794722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.794748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.794921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.794950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.795112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.795142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.795308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.795334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.795497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.795528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.795747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.795774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.795974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.796004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.796234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.796261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.796436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.796462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.796633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.796659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.796855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.796886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.797077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.797137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.797316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.797342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.797558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.797587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.797792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.797819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.797987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.798014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.798202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.798232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.798400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.798429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.798620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.798646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.798799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.798825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.799037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.799066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.799265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.799292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.799449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.799476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.799625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.799668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.799859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.799885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.800094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.800127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.800282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.413 [2024-07-25 19:18:26.800308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.413 qpair failed and we were unable to recover it.
00:27:34.413 [2024-07-25 19:18:26.800450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.800478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.800695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.800724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.800881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.800910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.801182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.801209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.801362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.801406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.801595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.801625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.801824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.801851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.802019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.802050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.802253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.802280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.802446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.802473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.802694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.802722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.802947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.802973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.803126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.803154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.803326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.803353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.803581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.803610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.803805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.803831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.804008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.804037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.804185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.804215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.804393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.804420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.804588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.804618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.804780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.804810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.805034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.805060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.805307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.805333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.805532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.805559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.805744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.805775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.805933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.805959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.806132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.806158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.806310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.806337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.806485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.806511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.806655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.806682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.806849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.806875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.807097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.807133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.807322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.807348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.807524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.807551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.807745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.807774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.807949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.807975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.808120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.414 [2024-07-25 19:18:26.808147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.414 qpair failed and we were unable to recover it.
00:27:34.414 [2024-07-25 19:18:26.808345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.808371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.808574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.808604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.808825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.808851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.809070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.809099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.809310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.809336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.809535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.809561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.809752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.809781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.809981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.810007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.810169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.810196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.810389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.810417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.810607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.810635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.810823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.810850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.811016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.811043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.811252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.811279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.811427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.811453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.811639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.811668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.811857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.811886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.812085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.812116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.812262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.812288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.812504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.812533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.812752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.812778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.812945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.812974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.813175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.813202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.813370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.813396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.813589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.813618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.813817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.813843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.813991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.814018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.814190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.814220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.814408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.814438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.814633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.814659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.814831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.415 [2024-07-25 19:18:26.814860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.415 qpair failed and we were unable to recover it.
00:27:34.415 [2024-07-25 19:18:26.815028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.415 [2024-07-25 19:18:26.815058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.415 qpair failed and we were unable to recover it. 00:27:34.415 [2024-07-25 19:18:26.815257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.415 [2024-07-25 19:18:26.815283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.415 qpair failed and we were unable to recover it. 00:27:34.415 [2024-07-25 19:18:26.815476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.415 [2024-07-25 19:18:26.815505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.415 qpair failed and we were unable to recover it. 00:27:34.415 [2024-07-25 19:18:26.815683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.415 [2024-07-25 19:18:26.815709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.415 qpair failed and we were unable to recover it. 00:27:34.415 [2024-07-25 19:18:26.815855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.415 [2024-07-25 19:18:26.815881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.415 qpair failed and we were unable to recover it. 
00:27:34.416 [2024-07-25 19:18:26.816050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.816079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.816254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.816280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.816429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.816456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.816634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.816664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.816839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.816865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 
00:27:34.416 [2024-07-25 19:18:26.817043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.817070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.817255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.817285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.817483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.817512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.817708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.817734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.817913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.817939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 
00:27:34.416 [2024-07-25 19:18:26.818098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.818133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.818322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.818348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.818538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.818567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.818764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.818794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.818968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.818994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 
00:27:34.416 [2024-07-25 19:18:26.819176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.819204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.819388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.819418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.819579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.819606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.819831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.819865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.820076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.820111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 
00:27:34.416 [2024-07-25 19:18:26.820304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.820331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.820502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.820532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.820736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.820762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.820952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.820978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.821199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.821228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 
00:27:34.416 [2024-07-25 19:18:26.821463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.821489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.821657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.821684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.821847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.821876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.822091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.822126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.822353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.822380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 
00:27:34.416 [2024-07-25 19:18:26.822524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.822566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.822748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.822777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.822977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.823004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.823174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.823204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.823391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.823421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 
00:27:34.416 [2024-07-25 19:18:26.823604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.823630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.416 qpair failed and we were unable to recover it. 00:27:34.416 [2024-07-25 19:18:26.823802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.416 [2024-07-25 19:18:26.823828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.824043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.824072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.824296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.824322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.824508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.824534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 
00:27:34.417 [2024-07-25 19:18:26.824683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.824710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.824886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.824925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.825111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.825142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.825329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.825359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.825556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.825582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 
00:27:34.417 [2024-07-25 19:18:26.825780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.825810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.826007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.826033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.826226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.826253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.826425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.826452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.826632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.826671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 
00:27:34.417 [2024-07-25 19:18:26.826859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.826886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.827070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.827097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.827282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.827308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.827496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.827524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.827718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.827747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 
00:27:34.417 [2024-07-25 19:18:26.827959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.827988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.828203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.828231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.828423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.828454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.828660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.828696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.417 qpair failed and we were unable to recover it. 00:27:34.417 [2024-07-25 19:18:26.828889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.417 [2024-07-25 19:18:26.828917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.701 qpair failed and we were unable to recover it. 
00:27:34.701 [2024-07-25 19:18:26.829137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.701 [2024-07-25 19:18:26.829166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.701 qpair failed and we were unable to recover it. 00:27:34.701 [2024-07-25 19:18:26.829431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.701 [2024-07-25 19:18:26.829460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.701 qpair failed and we were unable to recover it. 00:27:34.701 [2024-07-25 19:18:26.829644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.701 [2024-07-25 19:18:26.829670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.701 qpair failed and we were unable to recover it. 00:27:34.701 [2024-07-25 19:18:26.829823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.701 [2024-07-25 19:18:26.829849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.701 qpair failed and we were unable to recover it. 00:27:34.701 [2024-07-25 19:18:26.830029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.701 [2024-07-25 19:18:26.830069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.701 qpair failed and we were unable to recover it. 
00:27:34.701 [2024-07-25 19:18:26.830285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.701 [2024-07-25 19:18:26.830312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.701 qpair failed and we were unable to recover it. 00:27:34.701 [2024-07-25 19:18:26.830477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.701 [2024-07-25 19:18:26.830508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.701 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.830710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.830744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.830897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.830923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.831080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.831113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 
00:27:34.702 [2024-07-25 19:18:26.831272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.831299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.831474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.831502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.831666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.831695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.831872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.831898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.832080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.832117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 
00:27:34.702 [2024-07-25 19:18:26.832336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.832364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.832574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.832610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.832774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.832801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.832997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.833028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.833239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.833269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 
00:27:34.702 [2024-07-25 19:18:26.833443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.833471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.833619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.833645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.833817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.833847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.834060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.834086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.834239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.834266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 
00:27:34.702 [2024-07-25 19:18:26.834461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.834490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.834682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.834709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.834907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.834935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.835097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.835147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 00:27:34.702 [2024-07-25 19:18:26.835405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.702 [2024-07-25 19:18:26.835431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.702 qpair failed and we were unable to recover it. 
00:27:34.702 [2024-07-25 19:18:26.835603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.702 [2024-07-25 19:18:26.835632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.702 qpair failed and we were unable to recover it.
00:27:34.702 [2024-07-25 19:18:26.835790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.702 [2024-07-25 19:18:26.835820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.702 qpair failed and we were unable to recover it.
00:27:34.702 [2024-07-25 19:18:26.836010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.702 [2024-07-25 19:18:26.836036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.702 qpair failed and we were unable to recover it.
00:27:34.702 [2024-07-25 19:18:26.836231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.702 [2024-07-25 19:18:26.836262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.702 qpair failed and we were unable to recover it.
00:27:34.702 [2024-07-25 19:18:26.836454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.702 [2024-07-25 19:18:26.836481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.702 qpair failed and we were unable to recover it.
00:27:34.702 [2024-07-25 19:18:26.836679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.702 [2024-07-25 19:18:26.836705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.702 qpair failed and we were unable to recover it.
00:27:34.702 [2024-07-25 19:18:26.836902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.702 [2024-07-25 19:18:26.836931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.702 qpair failed and we were unable to recover it.
00:27:34.702 [2024-07-25 19:18:26.837093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.702 [2024-07-25 19:18:26.837130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.702 qpair failed and we were unable to recover it.
00:27:34.702 [2024-07-25 19:18:26.837348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.702 [2024-07-25 19:18:26.837379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.702 qpair failed and we were unable to recover it.
00:27:34.702 [2024-07-25 19:18:26.837577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.702 [2024-07-25 19:18:26.837606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.702 qpair failed and we were unable to recover it.
00:27:34.702 [2024-07-25 19:18:26.837774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.702 [2024-07-25 19:18:26.837803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.702 qpair failed and we were unable to recover it.
00:27:34.702 [2024-07-25 19:18:26.837990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.702 [2024-07-25 19:18:26.838017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.702 qpair failed and we were unable to recover it.
00:27:34.702 [2024-07-25 19:18:26.838161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.702 [2024-07-25 19:18:26.838188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.702 qpair failed and we were unable to recover it.
00:27:34.702 [2024-07-25 19:18:26.838338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.838365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.838573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.838601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.838777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.838804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.839024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.839053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.839229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.839257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.839432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.839460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.839615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.839644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.839835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.839862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.840046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.840076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.840295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.840322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.840489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.840515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.840708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.840738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.840908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.840938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.841211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.841239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.841433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.841462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.841649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.841678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.841880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.841906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.842055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.842081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.842241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.842267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.842415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.842441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.842600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.842629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.842818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.842848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.843055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.843083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.843294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.843324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.843508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.843537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.843727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.843753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.843970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.843999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.844198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.844225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.844396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.844423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.844618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.844647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.844873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.844899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.845074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.845105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.845310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.845339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.845536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.845565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.703 [2024-07-25 19:18:26.845794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.703 [2024-07-25 19:18:26.845820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.703 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.845994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.846030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.846216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.846246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.846433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.846461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.846647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.846676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.846870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.846896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.847144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.847187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.847362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.847389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.847564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.847590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.847762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.847788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.847968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.847996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.848169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.848199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.848362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.848389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.848583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.848612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.848803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.848833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.849055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.849082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.849286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.849315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.849502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.849531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.849761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.849787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.849977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.850006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.850217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.850246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.850440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.850467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.850655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.850684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.850900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.850929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.851125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.851153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.851354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.851384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.851575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.851602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.851799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.851826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.852029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.852060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.852266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.852293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.852465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.852493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.852645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.852671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.852835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.852861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.853028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.853055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.853200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.704 [2024-07-25 19:18:26.853227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.704 qpair failed and we were unable to recover it.
00:27:34.704 [2024-07-25 19:18:26.853423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-25 19:18:26.853452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.705 qpair failed and we were unable to recover it.
00:27:34.705 [2024-07-25 19:18:26.853624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-25 19:18:26.853650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.705 qpair failed and we were unable to recover it.
00:27:34.705 [2024-07-25 19:18:26.853828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-25 19:18:26.853854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.705 qpair failed and we were unable to recover it.
00:27:34.705 [2024-07-25 19:18:26.854066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-25 19:18:26.854096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.705 qpair failed and we were unable to recover it.
00:27:34.705 [2024-07-25 19:18:26.854317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-25 19:18:26.854343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.705 qpair failed and we were unable to recover it.
00:27:34.705 [2024-07-25 19:18:26.854531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-25 19:18:26.854559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.705 qpair failed and we were unable to recover it.
00:27:34.705 [2024-07-25 19:18:26.854719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-25 19:18:26.854753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.705 qpair failed and we were unable to recover it.
00:27:34.705 [2024-07-25 19:18:26.854944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-25 19:18:26.854974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.705 qpair failed and we were unable to recover it.
00:27:34.705 [2024-07-25 19:18:26.855145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-25 19:18:26.855172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.705 qpair failed and we were unable to recover it.
00:27:34.705 [2024-07-25 19:18:26.855385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-25 19:18:26.855414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.705 qpair failed and we were unable to recover it.
00:27:34.705 [2024-07-25 19:18:26.855630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-25 19:18:26.855659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.705 qpair failed and we were unable to recover it.
00:27:34.705 [2024-07-25 19:18:26.855861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-25 19:18:26.855887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.705 qpair failed and we were unable to recover it.
00:27:34.705 [2024-07-25 19:18:26.856084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-25 19:18:26.856117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.705 qpair failed and we were unable to recover it.
00:27:34.705 [2024-07-25 19:18:26.856323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.856352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.856562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.856591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.856813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.856844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.857035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.857062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.857265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.857295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 
00:27:34.705 [2024-07-25 19:18:26.857488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.857517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.857704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.857734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.857937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.857963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.858187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.858214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.858426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.858455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 
00:27:34.705 [2024-07-25 19:18:26.858696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.858746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.858959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.858985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.859182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.859212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.859428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.859454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.859779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.859847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 
00:27:34.705 [2024-07-25 19:18:26.860063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.860090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.860298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.860327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.860552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.860580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.860760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.860789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.861049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.861076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 
00:27:34.705 [2024-07-25 19:18:26.861297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.861324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.861541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.861570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.705 [2024-07-25 19:18:26.861835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-25 19:18:26.861864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.705 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.862053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.862079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.862238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.862266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 
00:27:34.706 [2024-07-25 19:18:26.862424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.862450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.862597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.862640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.862802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.862829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.863028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.863054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.863286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.863313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 
00:27:34.706 [2024-07-25 19:18:26.863498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.863524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.863692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.863718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.863917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.863946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.864149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.864180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.864355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.864382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 
00:27:34.706 [2024-07-25 19:18:26.864588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.864614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.864835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.864865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.865079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.865113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.865283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.865311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.865529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.865556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 
00:27:34.706 [2024-07-25 19:18:26.865753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.865782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.865943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.865973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.866176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.866203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.866412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.866438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.866609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.866639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 
00:27:34.706 [2024-07-25 19:18:26.866829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.866858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.867038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.867067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.867302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.867328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.867569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.867595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.867816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.867845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 
00:27:34.706 [2024-07-25 19:18:26.868033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.868063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.706 [2024-07-25 19:18:26.868327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.706 [2024-07-25 19:18:26.868353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.706 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.868531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.868557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.868727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.868772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.868997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.869025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 
00:27:34.707 [2024-07-25 19:18:26.869194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.869220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.869377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.869403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.869620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.869648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.869836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.869865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.870054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.870080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 
00:27:34.707 [2024-07-25 19:18:26.870236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.870263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.870460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.870489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.870684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.870711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.870881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.870907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.871051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.871078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 
00:27:34.707 [2024-07-25 19:18:26.871245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.871272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.871438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.871465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.871632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.871658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.871872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.871900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.872117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.872160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 
00:27:34.707 [2024-07-25 19:18:26.872307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.872334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.872512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.872539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.872688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.872715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.872883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.872913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.873112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.873141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 
00:27:34.707 [2024-07-25 19:18:26.873355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.873381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.873597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.873626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.873824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.873850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.874058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.874085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.874285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.874311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 
00:27:34.707 [2024-07-25 19:18:26.874571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.874600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.874785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.874815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.874978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.875009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.875201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.875227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.875418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.875448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 
00:27:34.707 [2024-07-25 19:18:26.875670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.875697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.875917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.875946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.876153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.876179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.707 [2024-07-25 19:18:26.876395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.707 [2024-07-25 19:18:26.876424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.707 qpair failed and we were unable to recover it. 00:27:34.708 [2024-07-25 19:18:26.876623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.708 [2024-07-25 19:18:26.876649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.708 qpair failed and we were unable to recover it. 
00:27:34.711 [2024-07-25 19:18:26.903173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.903200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 00:27:34.711 [2024-07-25 19:18:26.903405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.903435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 00:27:34.711 [2024-07-25 19:18:26.903751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.903816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 00:27:34.711 [2024-07-25 19:18:26.904038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.904063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 00:27:34.711 [2024-07-25 19:18:26.904278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.904305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 
00:27:34.711 [2024-07-25 19:18:26.904478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.904505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 00:27:34.711 [2024-07-25 19:18:26.904806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.904856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 00:27:34.711 [2024-07-25 19:18:26.905049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.905076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 00:27:34.711 [2024-07-25 19:18:26.905307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.905337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 00:27:34.711 [2024-07-25 19:18:26.905506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.905535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 
00:27:34.711 [2024-07-25 19:18:26.905748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.905777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 00:27:34.711 [2024-07-25 19:18:26.905979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.906006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 00:27:34.711 [2024-07-25 19:18:26.906206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.906233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 00:27:34.711 [2024-07-25 19:18:26.906428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.906456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 00:27:34.711 [2024-07-25 19:18:26.906678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.906704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 
00:27:34.711 [2024-07-25 19:18:26.906900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.906926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 00:27:34.711 [2024-07-25 19:18:26.907121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.907148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 00:27:34.711 [2024-07-25 19:18:26.907345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.711 [2024-07-25 19:18:26.907374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.711 qpair failed and we were unable to recover it. 00:27:34.711 [2024-07-25 19:18:26.907534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.907564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.907759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.907786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 
00:27:34.712 [2024-07-25 19:18:26.907981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.908010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.908277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.908307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.908532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.908561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.908779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.908805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.908996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.909025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 
00:27:34.712 [2024-07-25 19:18:26.909211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.909241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.909424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.909454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.909673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.909699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.909849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.909875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.910048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.910076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 
00:27:34.712 [2024-07-25 19:18:26.910308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.910337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.910507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.910534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.910726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.910755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.910926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.910960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.911149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.911179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 
00:27:34.712 [2024-07-25 19:18:26.911346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.911386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.911611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.911640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.911826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.911855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.912069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.912098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.912329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.912355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 
00:27:34.712 [2024-07-25 19:18:26.912576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.912602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.912802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.912828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.913034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.913063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.913245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.913271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.913441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.913467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 
00:27:34.712 [2024-07-25 19:18:26.913667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.913696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.913882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.913912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.914168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.914196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.914368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.914412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.914628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.914654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 
00:27:34.712 [2024-07-25 19:18:26.914851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.914877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.915052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.915079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.712 [2024-07-25 19:18:26.915277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.712 [2024-07-25 19:18:26.915304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.712 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.915544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.915573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.915824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.915875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 
00:27:34.713 [2024-07-25 19:18:26.916108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.916134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.916325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.916351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.916548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.916577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.916766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.916795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.916982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.917008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 
00:27:34.713 [2024-07-25 19:18:26.917191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.917218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.917410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.917440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.917778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.917835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.918093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.918128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.918339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.918365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 
00:27:34.713 [2024-07-25 19:18:26.918570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.918599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.918786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.918815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.919015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.919040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.919264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.919294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.919493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.919521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 
00:27:34.713 [2024-07-25 19:18:26.919900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.919952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.920165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.920192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.920381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.920410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.920598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.920632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.920850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.920876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 
00:27:34.713 [2024-07-25 19:18:26.921045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.921071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.921220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.921247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.921442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.921471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.921722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.921774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 00:27:34.713 [2024-07-25 19:18:26.921968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.713 [2024-07-25 19:18:26.921995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.713 qpair failed and we were unable to recover it. 
00:27:34.713 [... the same three-record sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously from 19:18:26.922188 through 19:18:26.946557 ...]
00:27:34.717 [2024-07-25 19:18:26.946787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.946814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.946988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.947014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.947202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.947229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.947445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.947474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.947703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.947729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 
00:27:34.717 [2024-07-25 19:18:26.947881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.947907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.948059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.948085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.948263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.948290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.948549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.948578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.948784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.948811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 
00:27:34.717 [2024-07-25 19:18:26.948993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.949019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.949196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.949223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.949432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.949459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.949629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.949656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.949845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.949872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 
00:27:34.717 [2024-07-25 19:18:26.950062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.950092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.950274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.950300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.950478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.950505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.950722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.950751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.950912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.950941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 
00:27:34.717 [2024-07-25 19:18:26.951142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.951171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.951386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.951412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.951602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.951632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.951794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.951823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.952009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.952039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 
00:27:34.717 [2024-07-25 19:18:26.952212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.952239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.952423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.952452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.952646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.952675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.952843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.952872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.953069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.953095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 
00:27:34.717 [2024-07-25 19:18:26.953297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.953332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.953523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.953552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.953866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.953927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.954146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.717 [2024-07-25 19:18:26.954173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.717 qpair failed and we were unable to recover it. 00:27:34.717 [2024-07-25 19:18:26.954332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.954358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 
00:27:34.718 [2024-07-25 19:18:26.954583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.954612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.954927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.954994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.955216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.955243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.955440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.955469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.955658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.955687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 
00:27:34.718 [2024-07-25 19:18:26.955883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.955938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.956158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.956185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.956365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.956394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.956578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.956607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.956862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.956912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 
00:27:34.718 [2024-07-25 19:18:26.957138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.957164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.957359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.957388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.957582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.957611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.957867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.957896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.958081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.958113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 
00:27:34.718 [2024-07-25 19:18:26.958268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.958295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.958496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.958522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.958825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.958854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.959054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.959080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.959296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.959325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 
00:27:34.718 [2024-07-25 19:18:26.959515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.959544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.959707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.959736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.959913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.959940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.960162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.960193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.960412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.960441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 
00:27:34.718 [2024-07-25 19:18:26.960723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.960752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.960953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.960979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.961172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.961201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.961418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.961447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.961770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.961832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 
00:27:34.718 [2024-07-25 19:18:26.962023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.962050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.962202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.962229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.718 [2024-07-25 19:18:26.962379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.718 [2024-07-25 19:18:26.962422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.718 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.962768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.962828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.963030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.963057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 
00:27:34.719 [2024-07-25 19:18:26.963208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.963239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.963420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.963447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.963702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.963756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.963981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.964010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.964186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.964213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 
00:27:34.719 [2024-07-25 19:18:26.964402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.964432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.964721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.964786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.964995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.965021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.965251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.965280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.965487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.965512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 
00:27:34.719 [2024-07-25 19:18:26.965659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.965686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.965880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.965906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.966150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.966176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.966352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.966378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.966615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.966641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 
00:27:34.719 [2024-07-25 19:18:26.966843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.966869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.967065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.967095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.967307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.967336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.967528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.967558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.967732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.967759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 
00:27:34.719 [2024-07-25 19:18:26.967952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.967983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.968167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.968194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.968411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.968440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.968606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.968633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.968855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.968884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 
00:27:34.719 [2024-07-25 19:18:26.969075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.969109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.969304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.969332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.969559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.969586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.969797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.969827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.970015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.970043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 
00:27:34.719 [2024-07-25 19:18:26.970229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.970259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.970478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.970505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.970709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.970738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.970948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.719 [2024-07-25 19:18:26.970977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.719 qpair failed and we were unable to recover it. 00:27:34.719 [2024-07-25 19:18:26.971166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.971197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 
00:27:34.720 [2024-07-25 19:18:26.971418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.971444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.971683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.971709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.971911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.971938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.972079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.972110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.972304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.972331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 
00:27:34.720 [2024-07-25 19:18:26.972504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.972535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.972700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.972726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.972945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.972974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.973153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.973180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.973404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.973433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 
00:27:34.720 [2024-07-25 19:18:26.973621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.973651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.973834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.973865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.974060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.974087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.974248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.974274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.974498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.974527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 
00:27:34.720 [2024-07-25 19:18:26.974798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.974849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.975066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.975091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.975305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.975334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.975506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.975532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.975794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.975844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 
00:27:34.720 [2024-07-25 19:18:26.976038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.976064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.976248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.976275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.976463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.976492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.976820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.976880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.977094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.977125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 
00:27:34.720 [2024-07-25 19:18:26.977299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.977327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.977543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.977572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.977785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.977814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.977974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.978001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.978180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.978207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 
00:27:34.720 [2024-07-25 19:18:26.978349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.978376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.978555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.978584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.978783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.978810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.978966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.978993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.720 qpair failed and we were unable to recover it. 00:27:34.720 [2024-07-25 19:18:26.979189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.720 [2024-07-25 19:18:26.979220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 
00:27:34.721 [2024-07-25 19:18:26.979491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.979544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.979764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.979790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.979990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.980019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.980176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.980207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.980375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.980405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 
00:27:34.721 [2024-07-25 19:18:26.980570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.980596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.980751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.980777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.980992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.981021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.981287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.981317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.981511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.981537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 
00:27:34.721 [2024-07-25 19:18:26.981729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.981763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.981946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.981975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.982293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.982347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.982569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.982595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.982796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.982825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 
00:27:34.721 [2024-07-25 19:18:26.983020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.983046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.983240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.983269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.983451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.983477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.983697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.983726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.983972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.984001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 
00:27:34.721 [2024-07-25 19:18:26.984205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.984241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.984387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.984414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.984610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.984639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.984854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.984883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.985108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.985138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 
00:27:34.721 [2024-07-25 19:18:26.985310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.985337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.985528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.985557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.985715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.721 [2024-07-25 19:18:26.985744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.721 qpair failed and we were unable to recover it. 00:27:34.721 [2024-07-25 19:18:26.985958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.985987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.986189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.986215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 
00:27:34.722 [2024-07-25 19:18:26.986391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.986418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.986563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.986589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.986786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.986812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.987027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.987053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.987230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.987257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 
00:27:34.722 [2024-07-25 19:18:26.987499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.987525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.987728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.987754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.987933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.987960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.988125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.988155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.988358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.988384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 
00:27:34.722 [2024-07-25 19:18:26.988559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.988589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.988783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.988809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.988999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.989027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.989226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.989253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.989476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.989505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 
00:27:34.722 [2024-07-25 19:18:26.989678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.989705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.989895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.989924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.990116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.990145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.990311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.990341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.990500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.990528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 
00:27:34.722 [2024-07-25 19:18:26.990748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.990782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.990979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.991007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.991229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.991255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.991455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.991481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.991689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.991715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 
00:27:34.722 [2024-07-25 19:18:26.991887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.991913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.992112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.992142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.992331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.992357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.992576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.992605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.992795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.992825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 
00:27:34.722 [2024-07-25 19:18:26.993010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.993039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.993217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.993243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.993424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.993450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.993600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.993626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 00:27:34.722 [2024-07-25 19:18:26.993821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.722 [2024-07-25 19:18:26.993850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.722 qpair failed and we were unable to recover it. 
00:27:34.722 [2024-07-25 19:18:26.994036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.994063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.994252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.994280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.994427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.994453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.994631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.994661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.994885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.994911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 
00:27:34.723 [2024-07-25 19:18:26.995058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.995085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.995261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.995305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.995462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.995491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.995710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.995736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.995952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.995981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 
00:27:34.723 [2024-07-25 19:18:26.996188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.996214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.996388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.996414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.996615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.996641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.996838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.996867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.997080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.997113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 
00:27:34.723 [2024-07-25 19:18:26.997484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.997535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.997723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.997749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.997968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.997997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.998211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.998237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.998383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.998428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 
00:27:34.723 [2024-07-25 19:18:26.998598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.998626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.998814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.998843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.999059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.999088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.999296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.999324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.999505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.999531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 
00:27:34.723 [2024-07-25 19:18:26.999694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.999728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:26.999885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:26.999915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:27.000092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:27.000127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:27.000325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:27.000352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:27.000541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:27.000570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 
00:27:34.723 [2024-07-25 19:18:27.000765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:27.000794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.723 [2024-07-25 19:18:27.001022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.723 [2024-07-25 19:18:27.001048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.723 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.001222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.001248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.001422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.001448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.001661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.001687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 
00:27:34.724 [2024-07-25 19:18:27.001847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.001877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.002071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.002098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.002274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.002301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.002498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.002524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.002691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.002717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 
00:27:34.724 [2024-07-25 19:18:27.002888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.002914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.003121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.003151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.003345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.003374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.003598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.003627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.003829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.003856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 
00:27:34.724 [2024-07-25 19:18:27.004047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.004075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.004283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.004310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.004509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.004538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.004721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.004747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.004920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.004947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 
00:27:34.724 [2024-07-25 19:18:27.005167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.005197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.005353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.005382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.005564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.005590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.005806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.005835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.006076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.006111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 
00:27:34.724 [2024-07-25 19:18:27.006323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.006349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.006555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.006581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.006784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.006813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.007022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.007051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 00:27:34.724 [2024-07-25 19:18:27.007212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.724 [2024-07-25 19:18:27.007242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.724 qpair failed and we were unable to recover it. 
00:27:34.724 [2024-07-25 19:18:27.007439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.724 [2024-07-25 19:18:27.007466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.724 qpair failed and we were unable to recover it.
00:27:34.724 [2024-07-25 19:18:27.007642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.724 [2024-07-25 19:18:27.007668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.724 qpair failed and we were unable to recover it.
00:27:34.724 [2024-07-25 19:18:27.007833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.724 [2024-07-25 19:18:27.007864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.724 qpair failed and we were unable to recover it.
00:27:34.724 [2024-07-25 19:18:27.008048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.724 [2024-07-25 19:18:27.008078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.724 qpair failed and we were unable to recover it.
00:27:34.724 [2024-07-25 19:18:27.008276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.724 [2024-07-25 19:18:27.008303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.724 qpair failed and we were unable to recover it.
00:27:34.724 [2024-07-25 19:18:27.008523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.724 [2024-07-25 19:18:27.008557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.724 qpair failed and we were unable to recover it.
00:27:34.724 [2024-07-25 19:18:27.008749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.724 [2024-07-25 19:18:27.008778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.008939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.008968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.009131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.009158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.009312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.009338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.009538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.009565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.009836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.009889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.010113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.010140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.010318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.010345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.010569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.010598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.010808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.010838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.011036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.011063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.011220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.011246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.011437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.011464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.011767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.011825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.012002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.012029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.012228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.012258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.012449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.012478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.012796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.012857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.013030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.013056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.013224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.013251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.013426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.013452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.013622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.013651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.013871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.013897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.014111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.014138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.014310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.014337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.014533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.014560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.014707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.014733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.014872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.014898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.015079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.015111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.015267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.015294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.015474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.015501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.015678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.015704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.015882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.015908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.016083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.016117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.016276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.016303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.016448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.725 [2024-07-25 19:18:27.016474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.725 qpair failed and we were unable to recover it.
00:27:34.725 [2024-07-25 19:18:27.016672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.016698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.016870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.016897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.017047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.017074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.017237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.017268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.017416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.017442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.017587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.017613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.017794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.017820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.018019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.018048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.018246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.018273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.018460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.018494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.018732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.018758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.018930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.018956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.019094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.019125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.019300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.019326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.019479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.019506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.019654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.019680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.019841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.019867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.020046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.020073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.020247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.020274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.020423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.020449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.020595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.020621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.020794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.020820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.020989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.021016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.021223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.021249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.021395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.021438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.021606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.021635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.021805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.021832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.021983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.726 [2024-07-25 19:18:27.022010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.726 qpair failed and we were unable to recover it.
00:27:34.726 [2024-07-25 19:18:27.022163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.022190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.022339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.022365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.022563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.022590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.022745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.022771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.022969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.022996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.023168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.023195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.023366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.023392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.023561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.023587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.023756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.023783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.023943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.023970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.024184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.024211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.024358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.024384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.024577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.024603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.024799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.024826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.025018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.025045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.025192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.025222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.025369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.025395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.025547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.025575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.025752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.025778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.025919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.025946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.026105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.026133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.026292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.026319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.026472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.026499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.026694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.026724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.026923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.026950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.027114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.027141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.027295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.027322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.027472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.027498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.027651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.027679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.027858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.027885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.028028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.028054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.028202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.028229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.028397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.028424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.028666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.028714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.028906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.028932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.029121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.029151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.727 [2024-07-25 19:18:27.029323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.727 [2024-07-25 19:18:27.029350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.727 qpair failed and we were unable to recover it.
00:27:34.728 [2024-07-25 19:18:27.029559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.728 [2024-07-25 19:18:27.029608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.728 qpair failed and we were unable to recover it.
00:27:34.728 [2024-07-25 19:18:27.029804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.728 [2024-07-25 19:18:27.029831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.728 qpair failed and we were unable to recover it.
00:27:34.728 [2024-07-25 19:18:27.029996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.728 [2024-07-25 19:18:27.030022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.728 qpair failed and we were unable to recover it.
00:27:34.728 [2024-07-25 19:18:27.030181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.728 [2024-07-25 19:18:27.030207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.728 qpair failed and we were unable to recover it.
00:27:34.728 [2024-07-25 19:18:27.030359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.728 [2024-07-25 19:18:27.030385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:34.728 qpair failed and we were unable to recover it.
00:27:34.728 [2024-07-25 19:18:27.030563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.728 [2024-07-25 19:18:27.030613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:34.728 qpair failed and we were unable to recover it.
00:27:34.728 [2024-07-25 19:18:27.030845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.728 [2024-07-25 19:18:27.030905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:34.728 qpair failed and we were unable to recover it.
00:27:34.728 [2024-07-25 19:18:27.031109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.728 [2024-07-25 19:18:27.031142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:34.728 qpair failed and we were unable to recover it.
00:27:34.728 [2024-07-25 19:18:27.031308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.031336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.031513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.031540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.031739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.031783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.031956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.031999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.032188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.032216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 
00:27:34.728 [2024-07-25 19:18:27.032433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.032481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.032677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.032721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.032903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.032948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.033152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.033181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.033343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.033382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 
00:27:34.728 [2024-07-25 19:18:27.033593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.033631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.033843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.033873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.034038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.034069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.034257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.034284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.034454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.034484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 
00:27:34.728 [2024-07-25 19:18:27.034689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.034718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.034940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.034971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.035163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.035190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.035364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.035391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.035541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.035567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 
00:27:34.728 [2024-07-25 19:18:27.035763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.035789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.036018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.036047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.036244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.036271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.036445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.036472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.036677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.036719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 
00:27:34.728 [2024-07-25 19:18:27.036970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.728 [2024-07-25 19:18:27.036999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.728 qpair failed and we were unable to recover it. 00:27:34.728 [2024-07-25 19:18:27.037215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.037242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.037414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.037445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.037637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.037666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.037982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.038028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 
00:27:34.729 [2024-07-25 19:18:27.038219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.038247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.038397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.038424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.038630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.038657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.038873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.038902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.039111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.039138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 
00:27:34.729 [2024-07-25 19:18:27.039325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.039352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.039549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.039575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.039752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.039809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.039972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.040003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.040183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.040212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 
00:27:34.729 [2024-07-25 19:18:27.040395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.040439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.040634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.040678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.040879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.040930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.041123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.041165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.041370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.041418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 
00:27:34.729 [2024-07-25 19:18:27.041630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.041677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.041889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.041933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.042152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.042192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.042407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.042455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.042728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.042777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 
00:27:34.729 [2024-07-25 19:18:27.042960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.042992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.043177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.043228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.043451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.043498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.043745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.043789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.043966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.043993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 
00:27:34.729 [2024-07-25 19:18:27.044188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.044234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.044439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.044494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.044691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.044746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.044955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.044994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.045224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.045278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 
00:27:34.729 [2024-07-25 19:18:27.045551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.045607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.045864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.729 [2024-07-25 19:18:27.045922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.729 qpair failed and we were unable to recover it. 00:27:34.729 [2024-07-25 19:18:27.046125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.046162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 00:27:34.730 [2024-07-25 19:18:27.046382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.046436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 00:27:34.730 [2024-07-25 19:18:27.046654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.046708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 
00:27:34.730 [2024-07-25 19:18:27.046882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.046921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 00:27:34.730 [2024-07-25 19:18:27.047100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.047153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 00:27:34.730 [2024-07-25 19:18:27.047376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.047434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 00:27:34.730 [2024-07-25 19:18:27.047692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.047746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 00:27:34.730 [2024-07-25 19:18:27.048031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.048068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 
00:27:34.730 [2024-07-25 19:18:27.048308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.048361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 00:27:34.730 [2024-07-25 19:18:27.048563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.048620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 00:27:34.730 [2024-07-25 19:18:27.048830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.048887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 00:27:34.730 [2024-07-25 19:18:27.049081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.049120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 00:27:34.730 [2024-07-25 19:18:27.049310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.049357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 
00:27:34.730 [2024-07-25 19:18:27.049611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.049664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 00:27:34.730 [2024-07-25 19:18:27.049938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.049988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 00:27:34.730 [2024-07-25 19:18:27.050190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.050241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 00:27:34.730 [2024-07-25 19:18:27.050414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.050460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 00:27:34.730 [2024-07-25 19:18:27.050687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.730 [2024-07-25 19:18:27.050733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.730 qpair failed and we were unable to recover it. 
00:27:34.733 [2024-07-25 19:18:27.075607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.733 [2024-07-25 19:18:27.075634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.733 qpair failed and we were unable to recover it. 00:27:34.733 [2024-07-25 19:18:27.075791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.733 [2024-07-25 19:18:27.075818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.733 qpair failed and we were unable to recover it. 00:27:34.733 [2024-07-25 19:18:27.075974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.733 [2024-07-25 19:18:27.076000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.733 qpair failed and we were unable to recover it. 00:27:34.733 [2024-07-25 19:18:27.076186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.733 [2024-07-25 19:18:27.076218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.733 qpair failed and we were unable to recover it. 00:27:34.733 [2024-07-25 19:18:27.076480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.733 [2024-07-25 19:18:27.076526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.733 qpair failed and we were unable to recover it. 
00:27:34.733 [2024-07-25 19:18:27.076709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.733 [2024-07-25 19:18:27.076735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.733 qpair failed and we were unable to recover it. 00:27:34.733 [2024-07-25 19:18:27.076915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.733 [2024-07-25 19:18:27.076942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.733 qpair failed and we were unable to recover it. 00:27:34.733 [2024-07-25 19:18:27.077143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.077180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.077355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.077382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.077576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.077603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 
00:27:34.734 [2024-07-25 19:18:27.077810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.077837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.078035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.078065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.078230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.078258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.078433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.078460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.078616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.078642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 
00:27:34.734 [2024-07-25 19:18:27.078807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.078833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.079031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.079058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.079214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.079241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.079426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.079456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.079663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.079707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 
00:27:34.734 [2024-07-25 19:18:27.079878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.079904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.080055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.080081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.080243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.080269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.080436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.080480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.080708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.080754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 
00:27:34.734 [2024-07-25 19:18:27.080931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.080957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.081139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.081166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.081318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.081345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.081528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.081555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.081730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.081757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 
00:27:34.734 [2024-07-25 19:18:27.081909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.081936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.082081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.082115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.082293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.082321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.082501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.082545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.082739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.082785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 
00:27:34.734 [2024-07-25 19:18:27.082951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.082978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.083155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.083182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.083326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.083352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.083521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.083547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.083710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.083736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 
00:27:34.734 [2024-07-25 19:18:27.083880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.083906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.084051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.084078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.084258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.084286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.084516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-07-25 19:18:27.084558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-07-25 19:18:27.084761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.084807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 
00:27:34.735 [2024-07-25 19:18:27.084989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.085015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.085210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.085238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.085399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.085432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.085667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.085712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.085908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.085935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 
00:27:34.735 [2024-07-25 19:18:27.086108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.086135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.086306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.086332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.086551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.086597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.086828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.086871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.087068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.087094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 
00:27:34.735 [2024-07-25 19:18:27.087294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.087321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.087489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.087515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.087658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.087684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.087829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.087856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.088037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.088063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 
00:27:34.735 [2024-07-25 19:18:27.088237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.088265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.088419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.088445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.088617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.088643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.088812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.088838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.089007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.089033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 
00:27:34.735 [2024-07-25 19:18:27.089244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.089272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.089413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.089441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.089637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.089680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.089831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.089857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.090028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.090055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 
00:27:34.735 [2024-07-25 19:18:27.090211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.090238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.090382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.090407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.090598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.090629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.090823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.090849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.090992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.091018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 
00:27:34.735 [2024-07-25 19:18:27.091184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.091211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.091426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.091472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.091734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.091778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.091951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-07-25 19:18:27.091978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-07-25 19:18:27.092159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.736 [2024-07-25 19:18:27.092187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.736 qpair failed and we were unable to recover it. 
00:27:34.739 [repeats: the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error pair and "qpair failed and we were unable to recover it." recur continuously for tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 from [2024-07-25 19:18:27.092387] through [2024-07-25 19:18:27.115769]]
00:27:34.739 [2024-07-25 19:18:27.115955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.115980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.116169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.116199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.116440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.116484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.116680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.116709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.116903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.116930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 
00:27:34.739 [2024-07-25 19:18:27.117125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.117152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.117303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.117328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.117484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.117510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.117732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.117775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.117951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.117978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 
00:27:34.739 [2024-07-25 19:18:27.118178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.118222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.118393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.118439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.118665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.118712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.118912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.118937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.119135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.119160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 
00:27:34.739 [2024-07-25 19:18:27.119358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.119401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.119627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.119670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.119845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.119876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.120052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.120078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 00:27:34.739 [2024-07-25 19:18:27.120277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.739 [2024-07-25 19:18:27.120322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.739 qpair failed and we were unable to recover it. 
00:27:34.739 [2024-07-25 19:18:27.120529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.120556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.120726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.120751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.120922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.120947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.121098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.121132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.121304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.121347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 
00:27:34.740 [2024-07-25 19:18:27.121546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.121588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.121787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.121813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.121960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.121986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.122159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.122185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.122334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.122360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 
00:27:34.740 [2024-07-25 19:18:27.122571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.122616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.122787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.122814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.122987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.123013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.123189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.123234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.123433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.123461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 
00:27:34.740 [2024-07-25 19:18:27.123669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.123699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.123856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.123882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.124078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.124109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.124296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.124339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.124547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.124573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 
00:27:34.740 [2024-07-25 19:18:27.124748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.124774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.124946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.124971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.125145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.125172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.125370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.125419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.125623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.125665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 
00:27:34.740 [2024-07-25 19:18:27.125839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.125866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.126012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.126038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.126212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.126238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.126415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.126441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.126636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.126681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 
00:27:34.740 [2024-07-25 19:18:27.126858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.126884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.127033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.127060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.127234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.127278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.127454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.127496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 00:27:34.740 [2024-07-25 19:18:27.127714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.127757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.740 qpair failed and we were unable to recover it. 
00:27:34.740 [2024-07-25 19:18:27.127902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.740 [2024-07-25 19:18:27.127929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.128100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.128131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.128307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.128337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.128503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.128528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.128756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.128799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 
00:27:34.741 [2024-07-25 19:18:27.128999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.129025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.129248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.129293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.129478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.129525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.129734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.129760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.129955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.129981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 
00:27:34.741 [2024-07-25 19:18:27.130173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.130217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.130463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.130508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.130730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.130757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.130908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.130934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.131111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.131137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 
00:27:34.741 [2024-07-25 19:18:27.131332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.131377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.131580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.131624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.131826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.131868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.132009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.132035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.132192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.132218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 
00:27:34.741 [2024-07-25 19:18:27.132393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.132417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.132575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.132604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.132849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.132893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.133094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.133132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 00:27:34.741 [2024-07-25 19:18:27.133290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.741 [2024-07-25 19:18:27.133316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:34.741 qpair failed and we were unable to recover it. 
00:27:35.023 [2024-07-25 19:18:27.156880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.156905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.023 [2024-07-25 19:18:27.157117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.157144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.023 [2024-07-25 19:18:27.157351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.157380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.023 [2024-07-25 19:18:27.157601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.157646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.023 [2024-07-25 19:18:27.157858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.157901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 
00:27:35.023 [2024-07-25 19:18:27.158081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.158115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.023 [2024-07-25 19:18:27.158326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.158354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.023 [2024-07-25 19:18:27.158515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.158540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.023 [2024-07-25 19:18:27.158713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.158738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.023 [2024-07-25 19:18:27.158884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.158911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 
00:27:35.023 [2024-07-25 19:18:27.159057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.159083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.023 [2024-07-25 19:18:27.159284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.159328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.023 [2024-07-25 19:18:27.159553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.159610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.023 [2024-07-25 19:18:27.159805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.159848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.023 [2024-07-25 19:18:27.160018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.160048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 
00:27:35.023 [2024-07-25 19:18:27.160225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.160252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.023 [2024-07-25 19:18:27.160437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.160466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.023 [2024-07-25 19:18:27.160670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.160713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.023 [2024-07-25 19:18:27.160888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.023 [2024-07-25 19:18:27.160913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.023 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.161112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.161138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 
00:27:35.024 [2024-07-25 19:18:27.161308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.161350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.161572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.161614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.161820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.161864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.162039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.162064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.162234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.162277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 
00:27:35.024 [2024-07-25 19:18:27.162475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.162501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.162670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.162695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.162838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.162863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.163072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.163098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.163303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.163348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 
00:27:35.024 [2024-07-25 19:18:27.163523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.163571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.163779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.163820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.164015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.164041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.164216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.164244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.164419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.164462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 
00:27:35.024 [2024-07-25 19:18:27.164654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.164683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.164873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.164900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.165077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.165119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.165313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.165339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.165531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.165573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 
00:27:35.024 [2024-07-25 19:18:27.165767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.165795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.165993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.166018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.166255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.166296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.166497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.166525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.166736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.166765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 
00:27:35.024 [2024-07-25 19:18:27.166930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.166955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.167156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.167198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.167397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.167425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.167646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.167673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.167851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.167876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 
00:27:35.024 [2024-07-25 19:18:27.168024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.168050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.168249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.168294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.168494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.168539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.168687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-07-25 19:18:27.168713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-07-25 19:18:27.168856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.168885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 
00:27:35.025 [2024-07-25 19:18:27.169032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.169058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.169285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.169329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.169502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.169545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.169745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.169788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.169965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.169990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 
00:27:35.025 [2024-07-25 19:18:27.170204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.170248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.170448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.170492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.170659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.170702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.170885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.170911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.171067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.171091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 
00:27:35.025 [2024-07-25 19:18:27.171270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.171313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.171517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.171559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.171734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.171777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.171953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.171980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.172174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.172219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 
00:27:35.025 [2024-07-25 19:18:27.172392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.172439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.172629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.172671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.172812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.172837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.173012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.173038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-07-25 19:18:27.173241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-07-25 19:18:27.173285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 
00:27:35.025 [2024-07-25 19:18:27.173485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.025 [2024-07-25 19:18:27.173528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.025 qpair failed and we were unable to recover it.
00:27:35.029 [the same three-line sequence — posix.c:1023:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every subsequent reconnect attempt from 19:18:27.173717 through 19:18:27.199917]
00:27:35.029 [2024-07-25 19:18:27.200093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.200131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.200330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.200378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.200549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.200594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.200760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.200803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.200974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.201001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 
00:27:35.029 [2024-07-25 19:18:27.201197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.201241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.201435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.201464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.201704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.201747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.201943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.201969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.202114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.202140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 
00:27:35.029 [2024-07-25 19:18:27.202334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.202380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.202580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.202623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.202799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.202842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.202995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.203021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.203232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.203260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 
00:27:35.029 [2024-07-25 19:18:27.203428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.203471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.203667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.203710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.203873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.203899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.204095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.204133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.204312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.204356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 
00:27:35.029 [2024-07-25 19:18:27.204540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.204584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.204782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.204812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.205001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.205028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.205230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.205273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.205479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.205527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 
00:27:35.029 [2024-07-25 19:18:27.205689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.205731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.205904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.205930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.206080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-07-25 19:18:27.206111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-07-25 19:18:27.206316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.206358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.206547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.206591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 
00:27:35.030 [2024-07-25 19:18:27.206767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.206810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.207009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.207035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.207237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.207282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.207507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.207551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.207865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.207909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 
00:27:35.030 [2024-07-25 19:18:27.208120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.208148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.208346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.208392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.208614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.208658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.208920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.208978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.209151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.209180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 
00:27:35.030 [2024-07-25 19:18:27.209391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.209434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.209614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.209657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.209870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.209897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.210049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.210075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.210298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.210343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 
00:27:35.030 [2024-07-25 19:18:27.210532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.210574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.210767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.210796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.210989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.211015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.211238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.211282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.211496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.211523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 
00:27:35.030 [2024-07-25 19:18:27.211720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.211763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.211947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.211973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.212153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.212187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.212362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.212405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.212662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.212705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 
00:27:35.030 [2024-07-25 19:18:27.212879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.212904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.213052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.213077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.213286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.213329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.213528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.213556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 00:27:35.030 [2024-07-25 19:18:27.213759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.030 [2024-07-25 19:18:27.213786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.030 qpair failed and we were unable to recover it. 
00:27:35.031 [2024-07-25 19:18:27.213961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.213986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.214203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.214245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.214441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.214469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.214676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.214719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.214890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.214921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 
00:27:35.031 [2024-07-25 19:18:27.215092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.215126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.215323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.215366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.215590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.215634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.215835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.215878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.216054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.216079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 
00:27:35.031 [2024-07-25 19:18:27.216287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.216332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.216507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.216550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.216776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.216805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.216965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.216990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.217176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.217220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 
00:27:35.031 [2024-07-25 19:18:27.217364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.217390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.217543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.217569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.217794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.217837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.218033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.218059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 00:27:35.031 [2024-07-25 19:18:27.218252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.031 [2024-07-25 19:18:27.218295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.031 qpair failed and we were unable to recover it. 
00:27:35.031 [... the same pair of errors (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeated for every retry from [2024-07-25 19:18:27.218484] through [2024-07-25 19:18:27.243487] ...]
00:27:35.034 [2024-07-25 19:18:27.243626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.034 [2024-07-25 19:18:27.243653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.034 qpair failed and we were unable to recover it. 00:27:35.034 [2024-07-25 19:18:27.243829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.034 [2024-07-25 19:18:27.243854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.034 qpair failed and we were unable to recover it. 00:27:35.034 [2024-07-25 19:18:27.244027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.034 [2024-07-25 19:18:27.244052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.034 qpair failed and we were unable to recover it. 00:27:35.034 [2024-07-25 19:18:27.244239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.034 [2024-07-25 19:18:27.244267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.034 qpair failed and we were unable to recover it. 00:27:35.034 [2024-07-25 19:18:27.244462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.034 [2024-07-25 19:18:27.244504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 
00:27:35.035 [2024-07-25 19:18:27.244726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.244769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.244945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.244971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.245202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.245246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.245482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.245535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.245756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.245799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 
00:27:35.035 [2024-07-25 19:18:27.245986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.246013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.246178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.246222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.246417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.246446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.246660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.246702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.246904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.246929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 
00:27:35.035 [2024-07-25 19:18:27.247137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.247163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.247378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.247414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.247640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.247684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.247833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.247860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.248029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.248055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 
00:27:35.035 [2024-07-25 19:18:27.248228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.248271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.248453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.248496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.248695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.248737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.248933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.248958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.249188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.249232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 
00:27:35.035 [2024-07-25 19:18:27.249479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.249523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.249726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.249768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.249918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.249945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.250122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.250148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.250311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.250353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 
00:27:35.035 [2024-07-25 19:18:27.250523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.250566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.250764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.250808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.250960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.250987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.251177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.251222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.251412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.251459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 
00:27:35.035 [2024-07-25 19:18:27.251653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.251697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.251844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.251870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.252020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.252046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.252264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.252308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 00:27:35.035 [2024-07-25 19:18:27.252534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.035 [2024-07-25 19:18:27.252577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.035 qpair failed and we were unable to recover it. 
00:27:35.036 [2024-07-25 19:18:27.252849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.252907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.253130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.253173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.253378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.253431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.253640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.253667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.253826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.253852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 
00:27:35.036 [2024-07-25 19:18:27.254046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.254072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.254253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.254296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.254493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.254535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.254755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.254798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.254951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.254976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 
00:27:35.036 [2024-07-25 19:18:27.255125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.255153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.255356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.255399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.255738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.255797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.255969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.255995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.256167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.256196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 
00:27:35.036 [2024-07-25 19:18:27.256381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.256428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.256625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.256667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.256903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.256928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.257139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.257165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.257344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.257386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 
00:27:35.036 [2024-07-25 19:18:27.257689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.257749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.257933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.257959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.258114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.258140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.258338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.258382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.258591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.258634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 
00:27:35.036 [2024-07-25 19:18:27.258855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.258897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.259081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.259121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.259310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.259354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.259634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.259685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.259900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.259928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 
00:27:35.036 [2024-07-25 19:18:27.260129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.260155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.260329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.260373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.260563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.260589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.260800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.260844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 00:27:35.036 [2024-07-25 19:18:27.261020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.036 [2024-07-25 19:18:27.261053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.036 qpair failed and we were unable to recover it. 
00:27:35.036 [2024-07-25 19:18:27.261241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.037 [2024-07-25 19:18:27.261288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.037 qpair failed and we were unable to recover it.
[... the same three-record cycle (posix.c:1023:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 19:18:27.261487 through 19:18:27.287216 ...]
00:27:35.040 [2024-07-25 19:18:27.287403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.287431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 00:27:35.040 [2024-07-25 19:18:27.287652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.287695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 00:27:35.040 [2024-07-25 19:18:27.287873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.287898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 00:27:35.040 [2024-07-25 19:18:27.288040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.288066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 00:27:35.040 [2024-07-25 19:18:27.288264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.288308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 
00:27:35.040 [2024-07-25 19:18:27.288490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.288533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 00:27:35.040 [2024-07-25 19:18:27.288823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.288876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 00:27:35.040 [2024-07-25 19:18:27.289070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.289096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 00:27:35.040 [2024-07-25 19:18:27.289328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.289371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 00:27:35.040 [2024-07-25 19:18:27.289570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.289613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 
00:27:35.040 [2024-07-25 19:18:27.289798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.289826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 00:27:35.040 [2024-07-25 19:18:27.289972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.289999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 00:27:35.040 [2024-07-25 19:18:27.290233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.290278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 00:27:35.040 [2024-07-25 19:18:27.290506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.290533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 00:27:35.040 [2024-07-25 19:18:27.290748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.290777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 
00:27:35.040 [2024-07-25 19:18:27.290986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.291011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 00:27:35.040 [2024-07-25 19:18:27.291196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.291239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 00:27:35.040 [2024-07-25 19:18:27.291424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.040 [2024-07-25 19:18:27.291468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.040 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.291649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.291694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.291840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.291865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 
00:27:35.041 [2024-07-25 19:18:27.292037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.292063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.292263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.292307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.292585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.292643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.292862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.292891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.293083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.293125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 
00:27:35.041 [2024-07-25 19:18:27.293332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.293380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.293640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.293684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.293907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.293950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.294095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.294131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.294326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.294370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 
00:27:35.041 [2024-07-25 19:18:27.294558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.294601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.294800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.294847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.295021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.295047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.295240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.295269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.295454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.295496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 
00:27:35.041 [2024-07-25 19:18:27.295672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.295713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.295943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.295986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.296176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.296228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.296450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.296496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.296698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.296739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 
00:27:35.041 [2024-07-25 19:18:27.296914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.296940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.297138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.297164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.297554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.297598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.297796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.297825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.297985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.298011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 
00:27:35.041 [2024-07-25 19:18:27.298219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.298263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.298516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.298574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.298780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.298807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.299002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.299028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-25 19:18:27.299213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.041 [2024-07-25 19:18:27.299258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.041 qpair failed and we were unable to recover it. 
00:27:35.041 [2024-07-25 19:18:27.299588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.299637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.299848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.299874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.300048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.300074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.300286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.300331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.300610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.300660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 
00:27:35.042 [2024-07-25 19:18:27.300859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.300902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.301081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.301125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.301306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.301349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.301559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.301601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.301776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.301820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 
00:27:35.042 [2024-07-25 19:18:27.301961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.301985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.302175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.302219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.302398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.302442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.302681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.302724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.302872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.302899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 
00:27:35.042 [2024-07-25 19:18:27.303070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.303096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.303310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.303353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.303569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.303611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.303803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.303846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.304017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.304045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 
00:27:35.042 [2024-07-25 19:18:27.304228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.304274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.304469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.304518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.304720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.304762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.304935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.304962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 00:27:35.042 [2024-07-25 19:18:27.305119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.042 [2024-07-25 19:18:27.305146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.042 qpair failed and we were unable to recover it. 
00:27:35.042 [2024-07-25 19:18:27.305371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.042 [2024-07-25 19:18:27.305412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.042 qpair failed and we were unable to recover it.
00:27:35.042 [2024-07-25 19:18:27.305648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.042 [2024-07-25 19:18:27.305691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.042 qpair failed and we were unable to recover it.
00:27:35.042 [2024-07-25 19:18:27.305868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.042 [2024-07-25 19:18:27.305894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.042 qpair failed and we were unable to recover it.
00:27:35.042 [2024-07-25 19:18:27.306085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.042 [2024-07-25 19:18:27.306117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.042 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.306312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.306358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.306563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.306606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.306790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.306836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.307011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.307036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.307247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.307292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.307493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.307521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.307726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.307769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.307966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.307991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.308160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.308189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.308383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.308436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.308659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.308703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.308876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.308902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.309096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.309137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.309338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.309366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.309550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.309596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.309806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.309851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.310053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.310078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.310242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.310268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.310492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.310535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.310824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.310874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.311049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.311075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.311260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.311287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.311478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.311521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.311733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.311776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.311927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.311952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.312131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.312173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.312343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.312386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.312559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.312602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.312825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.312869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.313052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.313079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.313306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.313350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.313661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.043 [2024-07-25 19:18:27.313723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.043 qpair failed and we were unable to recover it.
00:27:35.043 [2024-07-25 19:18:27.313953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.314002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.314226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.314269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.314509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.314553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.314742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.314785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.314938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.314965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.315154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.315184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.315404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.315447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.315688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.315732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.315904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.315930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.316108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.316135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.316356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.316399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.316688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.316735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.316890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.316916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.317090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.317125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.317350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.317393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.317626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.317671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.317838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.317882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.318056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.318081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.318267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.318310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.318520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.318564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.318761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.318804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.318999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.319024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.319223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.319266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.319433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.319476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.319653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.319696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.319869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.319895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.320065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.320090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.320302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.320345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.320538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.320581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.320772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.320800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.320985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.321012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.321245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.321290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.321495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.321538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.321733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.321777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.321948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.321975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.044 [2024-07-25 19:18:27.322169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.044 [2024-07-25 19:18:27.322198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.044 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.322418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.322461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.322690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.322718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.322905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.322930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.323137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.323164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.323365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.323412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.323609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.323653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.323836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.323866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.324039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.324066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.324278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.324322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.324478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.324504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.324702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.324745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.324896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.324922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.325086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.325119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.325325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.325352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.325552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.325595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.325913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.325973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.326130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.326157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.326309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.326352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.326532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.326575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.326745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.326789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.326987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.327013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.331322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.331363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.331562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.331591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.331817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.331860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.332033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.332059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.332217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.332244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.332394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.332421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.332602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.332645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.332870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.332914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.333100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.333132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.333311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.333355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.333694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.333756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.045 qpair failed and we were unable to recover it.
00:27:35.045 [2024-07-25 19:18:27.333931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.045 [2024-07-25 19:18:27.333958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.046 qpair failed and we were unable to recover it.
00:27:35.046 [2024-07-25 19:18:27.334154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.046 [2024-07-25 19:18:27.334198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.046 qpair failed and we were unable to recover it.
00:27:35.046 [2024-07-25 19:18:27.334391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.046 [2024-07-25 19:18:27.334435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.046 qpair failed and we were unable to recover it.
00:27:35.046 [2024-07-25 19:18:27.334640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.046 [2024-07-25 19:18:27.334684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.046 qpair failed and we were unable to recover it.
00:27:35.046 [2024-07-25 19:18:27.334859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.046 [2024-07-25 19:18:27.334885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.046 qpair failed and we were unable to recover it.
00:27:35.046 [2024-07-25 19:18:27.335079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.046 [2024-07-25 19:18:27.335122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.046 qpair failed and we were unable to recover it.
00:27:35.046 [2024-07-25 19:18:27.335323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.046 [2024-07-25 19:18:27.335367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.046 qpair failed and we were unable to recover it.
00:27:35.046 [2024-07-25 19:18:27.335548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.046 [2024-07-25 19:18:27.335594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.046 qpair failed and we were unable to recover it.
00:27:35.046 [2024-07-25 19:18:27.335800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.046 [2024-07-25 19:18:27.335843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.046 qpair failed and we were unable to recover it.
00:27:35.046 [2024-07-25 19:18:27.335997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.046 [2024-07-25 19:18:27.336023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.046 qpair failed and we were unable to recover it.
00:27:35.046 [2024-07-25 19:18:27.336203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.046 [2024-07-25 19:18:27.336246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.046 qpair failed and we were unable to recover it.
00:27:35.046 [2024-07-25 19:18:27.336424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.336469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.336665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.336713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.336909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.336934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.337081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.337121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.337291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.337334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 
00:27:35.046 [2024-07-25 19:18:27.337521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.337564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.337732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.337775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.337945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.337970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.338165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.338193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.338413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.338453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 
00:27:35.046 [2024-07-25 19:18:27.338650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.338678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.338892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.338934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.339115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.339143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.339296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.339322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.339519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.339565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 
00:27:35.046 [2024-07-25 19:18:27.339791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.339835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.339986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.340011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.340246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.340289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.340470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.340519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.340719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.340762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 
00:27:35.046 [2024-07-25 19:18:27.340905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.340932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.341130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.341156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.341351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.341395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.341557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.341599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.046 qpair failed and we were unable to recover it. 00:27:35.046 [2024-07-25 19:18:27.341832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.046 [2024-07-25 19:18:27.341881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 
00:27:35.047 [2024-07-25 19:18:27.342055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.342081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.342283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.342327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.342548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.342591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.342887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.342930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.343098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.343131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 
00:27:35.047 [2024-07-25 19:18:27.343326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.343352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.343551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.343599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.343914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.343974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.344155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.344182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.344385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.344428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 
00:27:35.047 [2024-07-25 19:18:27.344648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.344691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.344910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.344953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.345131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.345178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.345403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.345445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.345672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.345715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 
00:27:35.047 [2024-07-25 19:18:27.345889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.345915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.346109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.346139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.346316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.346360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.346544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.346587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.346789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.346833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 
00:27:35.047 [2024-07-25 19:18:27.346982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.347009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.347202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.347249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.347449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.347492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.347753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.347796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.347995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.348020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 
00:27:35.047 [2024-07-25 19:18:27.348184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.348211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.348386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.348428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.348632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.348659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.348839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.348882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 00:27:35.047 [2024-07-25 19:18:27.349036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.047 [2024-07-25 19:18:27.349063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.047 qpair failed and we were unable to recover it. 
00:27:35.047 [2024-07-25 19:18:27.349283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.349326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.349620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.349673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.349859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.349887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.350071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.350098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.350294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.350336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 
00:27:35.048 [2024-07-25 19:18:27.350537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.350566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.350774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.350817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.351013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.351038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.351224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.351250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.351472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.351533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 
00:27:35.048 [2024-07-25 19:18:27.351699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.351744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.351940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.351966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.352157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.352187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.352436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.352479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.352712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.352755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 
00:27:35.048 [2024-07-25 19:18:27.352923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.352950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.353091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.353122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.353325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.353369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.353535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.353578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 00:27:35.048 [2024-07-25 19:18:27.353783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.353826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 
00:27:35.048 [2024-07-25 19:18:27.354000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.048 [2024-07-25 19:18:27.354026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.048 qpair failed and we were unable to recover it. 
[the three messages above — connect() failed (errno = 111), sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeat verbatim for every retried connection attempt from 19:18:27.354233 through 19:18:27.380989]
00:27:35.051 [2024-07-25 19:18:27.381183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.051 [2024-07-25 19:18:27.381228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.051 qpair failed and we were unable to recover it. 00:27:35.051 [2024-07-25 19:18:27.381432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.051 [2024-07-25 19:18:27.381475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.051 qpair failed and we were unable to recover it. 00:27:35.051 [2024-07-25 19:18:27.381641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.051 [2024-07-25 19:18:27.381685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.051 qpair failed and we were unable to recover it. 00:27:35.051 [2024-07-25 19:18:27.381854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.381880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.382055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.382081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 
00:27:35.052 [2024-07-25 19:18:27.382284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.382330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.382519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.382562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.382745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.382791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.382963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.382990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.383207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.383251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 
00:27:35.052 [2024-07-25 19:18:27.383520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.383584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.383807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.383854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.384052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.384078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.384266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.384310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.384538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.384583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 
00:27:35.052 [2024-07-25 19:18:27.384807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.384850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.385005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.385032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.385233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.385277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.385485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.385528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.385698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.385742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 
00:27:35.052 [2024-07-25 19:18:27.385940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.385967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.386195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.386239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.386456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.386511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.386701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.386745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.386942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.386968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 
00:27:35.052 [2024-07-25 19:18:27.387158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.387188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.387390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.387433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.387580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.387607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.387807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.387834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.388035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.388061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 
00:27:35.052 [2024-07-25 19:18:27.388242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.388286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.388496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.388523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.388714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.388757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.052 [2024-07-25 19:18:27.388907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.052 [2024-07-25 19:18:27.388934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.052 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.389128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.389154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 
00:27:35.053 [2024-07-25 19:18:27.389353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.389399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.389596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.389640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.389839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.389882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.390029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.390054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.390253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.390297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 
00:27:35.053 [2024-07-25 19:18:27.390489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.390532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.390758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.390801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.390956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.390982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.391127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.391154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.391330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.391374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 
00:27:35.053 [2024-07-25 19:18:27.391597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.391641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.392004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.392068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.392299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.392342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.392547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.392591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.392781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.392825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 
00:27:35.053 [2024-07-25 19:18:27.392993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.393019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.393190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.393238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.393412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.393455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.393657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.393700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.393927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.393971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 
00:27:35.053 [2024-07-25 19:18:27.394147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.394175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.394371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.394400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.394650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.394693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.394894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.394919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.395099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.395133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 
00:27:35.053 [2024-07-25 19:18:27.395329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.395373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.395567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.395598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.395780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.395823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.395990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.396017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.396200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.396244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 
00:27:35.053 [2024-07-25 19:18:27.396418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.396461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.396632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.396675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.396846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.396872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.397067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.397093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 00:27:35.053 [2024-07-25 19:18:27.397299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.053 [2024-07-25 19:18:27.397341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.053 qpair failed and we were unable to recover it. 
00:27:35.053 [2024-07-25 19:18:27.397656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.054 [2024-07-25 19:18:27.397717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.054 qpair failed and we were unable to recover it. 00:27:35.054 [2024-07-25 19:18:27.397922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.054 [2024-07-25 19:18:27.397949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.054 qpair failed and we were unable to recover it. 00:27:35.054 [2024-07-25 19:18:27.398150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.054 [2024-07-25 19:18:27.398176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.054 qpair failed and we were unable to recover it. 00:27:35.054 [2024-07-25 19:18:27.398357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.054 [2024-07-25 19:18:27.398400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.054 qpair failed and we were unable to recover it. 00:27:35.054 [2024-07-25 19:18:27.398601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.054 [2024-07-25 19:18:27.398630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.054 qpair failed and we were unable to recover it. 
00:27:35.054 [2024-07-25 19:18:27.398834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.054 [2024-07-25 19:18:27.398877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.054 qpair failed and we were unable to recover it. 00:27:35.054 [2024-07-25 19:18:27.399053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.054 [2024-07-25 19:18:27.399079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.054 qpair failed and we were unable to recover it. 00:27:35.054 [2024-07-25 19:18:27.399290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.054 [2024-07-25 19:18:27.399333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.054 qpair failed and we were unable to recover it. 00:27:35.054 [2024-07-25 19:18:27.399584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.054 [2024-07-25 19:18:27.399628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.054 qpair failed and we were unable to recover it. 00:27:35.054 [2024-07-25 19:18:27.399801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.054 [2024-07-25 19:18:27.399844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.054 qpair failed and we were unable to recover it. 
00:27:35.057 [2024-07-25 19:18:27.425946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.425971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 00:27:35.057 [2024-07-25 19:18:27.426149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.426177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 00:27:35.057 [2024-07-25 19:18:27.426360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.426408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 00:27:35.057 [2024-07-25 19:18:27.426580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.426628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 00:27:35.057 [2024-07-25 19:18:27.426873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.426925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 
00:27:35.057 [2024-07-25 19:18:27.427084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.427115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 00:27:35.057 [2024-07-25 19:18:27.427322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.427366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 00:27:35.057 [2024-07-25 19:18:27.427565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.427609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 00:27:35.057 [2024-07-25 19:18:27.427807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.427856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 00:27:35.057 [2024-07-25 19:18:27.428032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.428059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 
00:27:35.057 [2024-07-25 19:18:27.428258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.428303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 00:27:35.057 [2024-07-25 19:18:27.428501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.428545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 00:27:35.057 [2024-07-25 19:18:27.428710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.428754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 00:27:35.057 [2024-07-25 19:18:27.428933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.428959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 00:27:35.057 [2024-07-25 19:18:27.429111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.429138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 
00:27:35.057 [2024-07-25 19:18:27.429330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.057 [2024-07-25 19:18:27.429377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.057 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.429753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.429806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.429993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.430019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.430226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.430271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.430481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.430524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 
00:27:35.058 [2024-07-25 19:18:27.430714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.430757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.430913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.430949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.431171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.431201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.431414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.431443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.431929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.431957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 
00:27:35.058 [2024-07-25 19:18:27.432141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.432169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.432370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.432416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.432636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.432680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.432882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.432908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.433058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.433085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 
00:27:35.058 [2024-07-25 19:18:27.433269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.433314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.433486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.433529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.433768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.433818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.434000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.434026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.434198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.434241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 
00:27:35.058 [2024-07-25 19:18:27.434407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.434451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.434688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.434739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.434917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.434944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.435118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.435145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.435315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.435359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 
00:27:35.058 [2024-07-25 19:18:27.435606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.435654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.435832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.435875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.436052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.436077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.436281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.436325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.436525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.436567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 
00:27:35.058 [2024-07-25 19:18:27.436735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.436777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.436954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.436980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.437133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.437161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.437364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.437413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.437614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.437658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 
00:27:35.058 [2024-07-25 19:18:27.437832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.437858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.438012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.438039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.058 qpair failed and we were unable to recover it. 00:27:35.058 [2024-07-25 19:18:27.438244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.058 [2024-07-25 19:18:27.438274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.438459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.438502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.438723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.438766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 
00:27:35.059 [2024-07-25 19:18:27.438939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.438964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.439141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.439168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.439350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.439394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.439569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.439613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.439750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.439775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 
00:27:35.059 [2024-07-25 19:18:27.439968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.439993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.440193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.440238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.440421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.440464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.440640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.440689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.440886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.440911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 
00:27:35.059 [2024-07-25 19:18:27.441108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.441134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.441299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.441342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.441570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.441613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.441808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.441852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.442026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.442052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 
00:27:35.059 [2024-07-25 19:18:27.442236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.442280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.442445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.442488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.442663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.442707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.442883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.442910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.443082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.443123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 
00:27:35.059 [2024-07-25 19:18:27.443308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.443352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.443586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.443628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.443830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.443859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.444048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.444074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.444268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.444295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 
00:27:35.059 [2024-07-25 19:18:27.444534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.444577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.444802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.444844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.445015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.445041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.445249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.445276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 00:27:35.059 [2024-07-25 19:18:27.445478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-07-25 19:18:27.445524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.059 qpair failed and we were unable to recover it. 
00:27:35.059 [2024-07-25 19:18:27.445723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.059 [2024-07-25 19:18:27.445766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.059 qpair failed and we were unable to recover it.
00:27:35.059 [2024-07-25 19:18:27.445963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.059 [2024-07-25 19:18:27.445989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.059 qpair failed and we were unable to recover it.
00:27:35.059 [2024-07-25 19:18:27.446192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.059 [2024-07-25 19:18:27.446237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.446411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.446458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.446656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.446700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.446889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.446916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.447094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.447125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.447326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.447355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.447540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.447583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.447754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.447798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.447971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.447996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.448178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.448223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.448394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.448440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.448634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.448677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.448823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.448849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.449028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.449054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.449231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.449275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.449453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.449496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.449662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.449706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.449902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.449928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.450071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.450099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.450284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.450328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.450535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.450578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.450755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.450780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.450924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.450951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.451150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.451194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.451397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.451440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.451689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.451761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.451938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.451964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.452116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.452142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.452324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.452371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.452658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.452706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.452905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.452931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.453113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.453141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.453368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.453411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.453714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.453764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.453971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.454014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.454194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.060 [2024-07-25 19:18:27.454221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.060 qpair failed and we were unable to recover it.
00:27:35.060 [2024-07-25 19:18:27.454417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.454461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.454687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.454731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.454908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.454934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.455107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.455134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.455313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.455338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.455605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.455659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.455836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.455879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.456049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.456075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.456229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.456255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.456429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.456472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.456669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.456712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.456892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.456935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.457084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.457115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.457311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.457355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.457552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.457580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.457790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.457834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.458004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.458029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.458197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.458241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.458409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.458451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.458634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.458678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.458887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.458915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.459058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.459084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.459291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.459335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.459514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.459557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.459726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.459768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.459941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.459967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.460163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.460194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.460419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.460446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.460642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.460686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.460839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.460865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.461031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.461057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.461259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.461303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.461529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.461573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1018890 Killed "${NVMF_APP[@]}" "$@"
00:27:35.061 [2024-07-25 19:18:27.461774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 [2024-07-25 19:18:27.461818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.061 qpair failed and we were unable to recover it.
00:27:35.061 [2024-07-25 19:18:27.462013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.061 19:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 [2024-07-25 19:18:27.462039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 [2024-07-25 19:18:27.462222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 19:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 [2024-07-25 19:18:27.462267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 19:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:35.062 [2024-07-25 19:18:27.462430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.462473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 19:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:35.062 [2024-07-25 19:18:27.462667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.462710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 19:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:35.062 [2024-07-25 19:18:27.462914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.462940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 [2024-07-25 19:18:27.463120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.463147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 [2024-07-25 19:18:27.463341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.463385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 [2024-07-25 19:18:27.463756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.463816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 [2024-07-25 19:18:27.463987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.464014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 [2024-07-25 19:18:27.464197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.464242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 [2024-07-25 19:18:27.464458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.464503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 [2024-07-25 19:18:27.464725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.464770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 [2024-07-25 19:18:27.464953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.464980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 [2024-07-25 19:18:27.465185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.465229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 [2024-07-25 19:18:27.465425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.465469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 [2024-07-25 19:18:27.465729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.465787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 [2024-07-25 19:18:27.465953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.465979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 [2024-07-25 19:18:27.466133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.466162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 19:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1019581
00:27:35.062 19:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:35.062 19:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1019581
00:27:35.062 [2024-07-25 19:18:27.466367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.466411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 19:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1019581 ']'
00:27:35.062 [2024-07-25 19:18:27.466575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.466618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 19:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:35.062 19:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:35.062 [2024-07-25 19:18:27.466846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 19:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:35.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:35.062 [2024-07-25 19:18:27.466891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 19:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:35.062 [2024-07-25 19:18:27.467062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 19:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:35.062 [2024-07-25 19:18:27.467089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.062 qpair failed and we were unable to recover it.
00:27:35.062 [2024-07-25 19:18:27.467284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.062 [2024-07-25 19:18:27.467330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.467531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.467575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.467776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.467820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.467967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.467992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.468166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.468196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.468412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.468457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.468679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.468723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.468883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.468909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.469081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.469200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.469407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.469451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.469674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.469719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.469916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.469962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.470132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.470159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.470335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.470381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.470608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.470651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.470852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.470881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.471067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.471096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.471311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.471338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.471548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.471593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.471785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.471828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.472002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.472028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.472184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.472211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.472419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.472463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.472668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.472715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.472908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.472952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.063 [2024-07-25 19:18:27.473164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.063 [2024-07-25 19:18:27.473209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.063 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.473386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.473429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.473634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.473678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.473881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.473924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.474081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.474131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.474344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.474374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.474577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.474620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.474827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.474854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.475040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.475067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.475314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.475358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.475572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.475603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.475797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.475840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.475996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.476023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.476217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.476262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.476430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.476472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.476666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.476710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.476908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.476934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.477111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.477137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.477337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.477363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.477545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.477588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.477756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.477803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.477982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.478008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.478190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.478217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.478431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.478476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.478719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.478762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.478904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.478930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.479144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.479174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.479417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.479459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.479675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.479718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.479896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.479922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.480120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.480146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.480323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.480366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.480560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.480588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.345 qpair failed and we were unable to recover it.
00:27:35.345 [2024-07-25 19:18:27.480845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.345 [2024-07-25 19:18:27.480873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.481051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.481076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.481233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.481259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.481448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.481491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.481684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.481728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.481951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.481980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.482219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.482265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.482458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.482488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.482674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.482716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.482877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.482905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.483058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.483092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.483297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.483342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.483521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.483548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.483723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.483766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.483939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.483965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.484148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.484178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.484366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.484410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.484633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.484681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.484882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.484908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.485055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.485081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.485269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.485312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.485501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.485529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.485845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.485896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.486096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.486138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.486311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.486341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.486551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.486580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.486764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.486808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.486981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.487007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.487203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.487247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.487420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.346 [2024-07-25 19:18:27.487464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.346 qpair failed and we were unable to recover it.
00:27:35.346 [2024-07-25 19:18:27.487661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.346 [2024-07-25 19:18:27.487705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.346 qpair failed and we were unable to recover it. 00:27:35.346 [2024-07-25 19:18:27.487882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.346 [2024-07-25 19:18:27.487907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.346 qpair failed and we were unable to recover it. 00:27:35.346 [2024-07-25 19:18:27.488070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.346 [2024-07-25 19:18:27.488096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.346 qpair failed and we were unable to recover it. 00:27:35.346 [2024-07-25 19:18:27.488313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.346 [2024-07-25 19:18:27.488356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.346 qpair failed and we were unable to recover it. 00:27:35.346 [2024-07-25 19:18:27.488557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.346 [2024-07-25 19:18:27.488601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.346 qpair failed and we were unable to recover it. 
00:27:35.346 [2024-07-25 19:18:27.488777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.346 [2024-07-25 19:18:27.488823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.346 qpair failed and we were unable to recover it. 00:27:35.346 [2024-07-25 19:18:27.488997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.346 [2024-07-25 19:18:27.489023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.346 qpair failed and we were unable to recover it. 00:27:35.346 [2024-07-25 19:18:27.489236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.346 [2024-07-25 19:18:27.489280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.346 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.489478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.489507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.489746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.489790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 
00:27:35.347 [2024-07-25 19:18:27.489987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.490013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.490236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.490281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.490507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.490551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.490782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.490826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.490976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.491003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 
00:27:35.347 [2024-07-25 19:18:27.491212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.491257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.491454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.491497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.491696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.491740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.491943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.491968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.492197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.492241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 
00:27:35.347 [2024-07-25 19:18:27.492449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.492493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.492673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.492717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.492891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.492917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.493086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.493125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.493322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.493351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 
00:27:35.347 [2024-07-25 19:18:27.493601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.493644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.493955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.493997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.494198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.494230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.494427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.494476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.494683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.494727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 
00:27:35.347 [2024-07-25 19:18:27.494932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.494958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.495154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.495201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.495405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.495449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.495645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.495688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.495861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.495887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 
00:27:35.347 [2024-07-25 19:18:27.496058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.496092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.496268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.496312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.496515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.496560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.496759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.496803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.496978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.497006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 
00:27:35.347 [2024-07-25 19:18:27.497174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.497218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.497389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.497433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.497601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.497646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.497821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.497849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 00:27:35.347 [2024-07-25 19:18:27.498050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.347 [2024-07-25 19:18:27.498076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.347 qpair failed and we were unable to recover it. 
00:27:35.347 [2024-07-25 19:18:27.498252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.498296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.498589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.498646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.498867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.498911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.499087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.499126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.499302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.499328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 
00:27:35.348 [2024-07-25 19:18:27.499636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.499665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.499872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.499915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.500087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.500121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.500276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.500302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.500535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.500577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 
00:27:35.348 [2024-07-25 19:18:27.500800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.500843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.500995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.501022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.501243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.501287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.501542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.501590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.501811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.501853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 
00:27:35.348 [2024-07-25 19:18:27.502004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.502030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.502206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.502234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.502459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.502503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.502704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.502747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.502947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.502973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 
00:27:35.348 [2024-07-25 19:18:27.503174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.503221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.503408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.503454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.503649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.503698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.503869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.503896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.504096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.504138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 
00:27:35.348 [2024-07-25 19:18:27.504329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.504375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.504538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.504582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.504808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.504851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.505025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.505050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.505262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.505289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 
00:27:35.348 [2024-07-25 19:18:27.505495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.505538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.505682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.505709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.505887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.505914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.506084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.506127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.506327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.506370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 
00:27:35.348 [2024-07-25 19:18:27.506565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.506608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.506825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.506868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.507041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.348 [2024-07-25 19:18:27.507067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.348 qpair failed and we were unable to recover it. 00:27:35.348 [2024-07-25 19:18:27.507303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.507346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.507547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.507591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 
00:27:35.349 [2024-07-25 19:18:27.507790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.507833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.507987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.508013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.508210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.508255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.508427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.508470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.508637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.508680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 
00:27:35.349 [2024-07-25 19:18:27.509012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.509065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.509262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.509307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.509505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.509533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.509769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.509812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.510023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.510048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 
00:27:35.349 [2024-07-25 19:18:27.510235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.510280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.510469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.510514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.510682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.510726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.510903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.510929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.511100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.511131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 
00:27:35.349 [2024-07-25 19:18:27.511329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.511373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.511640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.511683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.511887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.511932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.512131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.512157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.512161] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:35.349 [2024-07-25 19:18:27.512250] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.349 [2024-07-25 19:18:27.512326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.512369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.512549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.512592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.512774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.512823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.513009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.513035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 
00:27:35.349 [2024-07-25 19:18:27.513188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.513215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.513439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.513483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.513655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.349 [2024-07-25 19:18:27.513683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.349 qpair failed and we were unable to recover it. 00:27:35.349 [2024-07-25 19:18:27.513904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.513931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.514110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.514143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 
00:27:35.350 [2024-07-25 19:18:27.514367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.514410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.514695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.514752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.514975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.515019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.515171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.515198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.515401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.515445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 
00:27:35.350 [2024-07-25 19:18:27.515634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.515677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.515877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.515920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.516122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.516149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.516345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.516392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.516643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.516686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 
00:27:35.350 [2024-07-25 19:18:27.516914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.516957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.517191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.517218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.517445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.517487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.517805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.517860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.518059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.518084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 
00:27:35.350 [2024-07-25 19:18:27.518242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.518269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.518462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.518505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.518815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.518858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.519065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.519092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.519283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.519310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 
00:27:35.350 [2024-07-25 19:18:27.519511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.519555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.519749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.519778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.519965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.519990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.520137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.520163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.520358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.520402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 
00:27:35.350 [2024-07-25 19:18:27.520598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.520642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.520811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.520837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.521009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.521034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.521206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.521249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.521433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.521460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 
00:27:35.350 [2024-07-25 19:18:27.521685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.521727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.521904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.521931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.522128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.522156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.522326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.522373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 00:27:35.350 [2024-07-25 19:18:27.522599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.350 [2024-07-25 19:18:27.522643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.350 qpair failed and we were unable to recover it. 
00:27:35.351 [2024-07-25 19:18:27.522844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.522889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.523059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.523085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.523289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.523332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.523503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.523546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.523888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.523942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 
00:27:35.351 [2024-07-25 19:18:27.524139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.524166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.524387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.524429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.524614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.524658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.524859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.524903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.525055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.525081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 
00:27:35.351 [2024-07-25 19:18:27.525254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.525298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.525464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.525509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.525711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.525754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.525928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.525954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.526179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.526208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 
00:27:35.351 [2024-07-25 19:18:27.526405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.526447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.526672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.526715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.526890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.526916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.527087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.527130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.527335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.527379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 
00:27:35.351 [2024-07-25 19:18:27.527575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.527619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.527814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.527843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.528038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.528065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.528244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.528288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.528456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.528501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 
00:27:35.351 [2024-07-25 19:18:27.528678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.528722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.528945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.528988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.529141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.529168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.529370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.529414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.529579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.529623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 
00:27:35.351 [2024-07-25 19:18:27.529847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.529892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.530064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.530091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.530281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.530330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.530524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.530567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 00:27:35.351 [2024-07-25 19:18:27.530787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.351 [2024-07-25 19:18:27.530831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.351 qpair failed and we were unable to recover it. 
00:27:35.354 [2024-07-25 19:18:27.553682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.354 [2024-07-25 19:18:27.553725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.354 qpair failed and we were unable to recover it. 00:27:35.354 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.354 [2024-07-25 19:18:27.553905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.354 [2024-07-25 19:18:27.553931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.354 qpair failed and we were unable to recover it. 00:27:35.354 [2024-07-25 19:18:27.554105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.354 [2024-07-25 19:18:27.554139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.354 qpair failed and we were unable to recover it. 00:27:35.354 [2024-07-25 19:18:27.554338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.354 [2024-07-25 19:18:27.554364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.354 qpair failed and we were unable to recover it. 00:27:35.354 [2024-07-25 19:18:27.554556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.354 [2024-07-25 19:18:27.554598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.354 qpair failed and we were unable to recover it. 
00:27:35.354 [2024-07-25 19:18:27.554764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.354 [2024-07-25 19:18:27.554807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.354 qpair failed and we were unable to recover it. 00:27:35.354 [2024-07-25 19:18:27.554979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.354 [2024-07-25 19:18:27.555005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.354 qpair failed and we were unable to recover it. 00:27:35.354 [2024-07-25 19:18:27.555204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.555251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.555448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.555477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.555697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.555742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 
00:27:35.355 [2024-07-25 19:18:27.555943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.555969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.556116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.556143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.556333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.556360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.556559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.556603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.556830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.556873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 
00:27:35.355 [2024-07-25 19:18:27.557035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.557062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.557223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.557250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.557405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.557431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.557597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.557623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.557844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.557871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 
00:27:35.355 [2024-07-25 19:18:27.558044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.558070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.558257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.558285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.558463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.558490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.558641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.558671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.558814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.558840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 
00:27:35.355 [2024-07-25 19:18:27.559008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.559034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.559188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.559216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.559367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.559394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.559594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.559620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.559777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.559803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 
00:27:35.355 [2024-07-25 19:18:27.559949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.559976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.560158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.560185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.560362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.560388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.560591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.560617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.560814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.560840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 
00:27:35.355 [2024-07-25 19:18:27.560980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.561006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.561156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.561183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.561340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.561367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.561509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.561535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.561682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.561709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 
00:27:35.355 [2024-07-25 19:18:27.561888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.561915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.562091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.562131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.562329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.562356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.562521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.562547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 00:27:35.355 [2024-07-25 19:18:27.562711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.355 [2024-07-25 19:18:27.562737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.355 qpair failed and we were unable to recover it. 
00:27:35.356 [2024-07-25 19:18:27.562909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.562934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.563076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.563116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.563311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.563338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.563484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.563510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.563686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.563712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 
00:27:35.356 [2024-07-25 19:18:27.563887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.563914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.564084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.564116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.564269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.564296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.564474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.564500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.564669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.564694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 
00:27:35.356 [2024-07-25 19:18:27.564866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.564892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.565042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.565068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.565251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.565278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.565423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.565449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.565618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.565644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 
00:27:35.356 [2024-07-25 19:18:27.565841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.565867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.566011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.566037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.566202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.566233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.566410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.566441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.566638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.566664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 
00:27:35.356 [2024-07-25 19:18:27.566832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.566858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.567032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.567057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.567221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.567248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.567414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.567440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.567588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.567614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 
00:27:35.356 [2024-07-25 19:18:27.567785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.567812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.567989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.568015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.568213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.568240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.568415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.568441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.568613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.568640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 
00:27:35.356 [2024-07-25 19:18:27.568840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.568866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.569013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.569040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.569247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.569274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.569449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.569476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 00:27:35.356 [2024-07-25 19:18:27.569669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.356 [2024-07-25 19:18:27.569695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.356 qpair failed and we were unable to recover it. 
00:27:35.356 [2024-07-25 19:18:27.569847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.569873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.570020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.570046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.570235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.570263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.570439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.570465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.570611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.570636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 
00:27:35.357 [2024-07-25 19:18:27.570811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.570837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.570989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.571015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.571162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.571189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.571363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.571390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.571570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.571596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 
00:27:35.357 [2024-07-25 19:18:27.571747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.571775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.571946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.571973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.572113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.572144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.572314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.572340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.572510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.572536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 
00:27:35.357 [2024-07-25 19:18:27.572690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.572716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.572912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.572937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.573113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.573140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.573316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.573341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.573540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.573566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 
00:27:35.357 [2024-07-25 19:18:27.573704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.573731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.573880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.573906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.574112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.574139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.574314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.574344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.574488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.574514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 
00:27:35.357 [2024-07-25 19:18:27.574658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.574685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.574827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.574853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.575027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.575053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.575212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.575239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.575409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.575436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 
00:27:35.357 [2024-07-25 19:18:27.575605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.575630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.575806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.575831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.576005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.576033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.576212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.576239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.576412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.576438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 
00:27:35.357 [2024-07-25 19:18:27.576581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.576607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.576755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.576781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.576943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.357 [2024-07-25 19:18:27.576969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.357 qpair failed and we were unable to recover it. 00:27:35.357 [2024-07-25 19:18:27.577139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.577165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.577341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.577367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 
00:27:35.358 [2024-07-25 19:18:27.577506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.577531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.577688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.577713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.577892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.577918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.578067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.578094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.578270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.578297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 
00:27:35.358 [2024-07-25 19:18:27.578454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.578480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.578629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.578654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.578797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.578824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.578972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.578998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.579153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.579180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 
00:27:35.358 [2024-07-25 19:18:27.579356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.579383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.579553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.579579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.579725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.579751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.579926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.579953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.580127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.580155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 
00:27:35.358 [2024-07-25 19:18:27.580298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.580325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.580498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.580524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.580724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.580750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.580896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.580922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.581070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.581095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 
00:27:35.358 [2024-07-25 19:18:27.581248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.581274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.581449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.581475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.581647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.581673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.581818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.581847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.581992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.582018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 
00:27:35.358 [2024-07-25 19:18:27.582190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.582217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.582387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.582413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.582552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.582578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.582750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.582776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.582926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.582954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 
00:27:35.358 [2024-07-25 19:18:27.583153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.583180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.583352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.583378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.583553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.583579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.583752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.583779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 00:27:35.358 [2024-07-25 19:18:27.583943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.583968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.358 qpair failed and we were unable to recover it. 
00:27:35.358 [2024-07-25 19:18:27.584133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.358 [2024-07-25 19:18:27.584160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.584313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.584340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.584545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.584570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.584718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.584743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.584912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.584938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 
00:27:35.359 [2024-07-25 19:18:27.585109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.585136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.585316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.585344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.585543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.585569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.585714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.585741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.585889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.585915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 
00:27:35.359 [2024-07-25 19:18:27.586056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.586081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.586287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.586312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.586463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.586489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.586644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.586670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.586847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.586874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 
00:27:35.359 [2024-07-25 19:18:27.587049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.587075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.587254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.587280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.587451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.587477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.587623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.587649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.587800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.587826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 
00:27:35.359 [2024-07-25 19:18:27.587998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.588023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.588169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.588197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.588352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.588378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.588548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.588575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 00:27:35.359 [2024-07-25 19:18:27.588742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.359 [2024-07-25 19:18:27.588768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.359 qpair failed and we were unable to recover it. 
00:27:35.359 [2024-07-25 19:18:27.588937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.359 [2024-07-25 19:18:27.588963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.359 qpair failed and we were unable to recover it.
00:27:35.359 [2024-07-25 19:18:27.589123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.359 [2024-07-25 19:18:27.589150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.359 qpair failed and we were unable to recover it.
00:27:35.359 [2024-07-25 19:18:27.589322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.359 [2024-07-25 19:18:27.589348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.359 qpair failed and we were unable to recover it.
00:27:35.359 [2024-07-25 19:18:27.589521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.359 [2024-07-25 19:18:27.589551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.359 qpair failed and we were unable to recover it.
00:27:35.359 [2024-07-25 19:18:27.589700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.359 [2024-07-25 19:18:27.589727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.359 qpair failed and we were unable to recover it.
00:27:35.359 [2024-07-25 19:18:27.589903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.359 [2024-07-25 19:18:27.589929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.359 qpair failed and we were unable to recover it.
00:27:35.359 [2024-07-25 19:18:27.590129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.359 [2024-07-25 19:18:27.590156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.359 qpair failed and we were unable to recover it.
00:27:35.359 [2024-07-25 19:18:27.590331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.590357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.590493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.590519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.590691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.590717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.590896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.590922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.591062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.591088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.591237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.591263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.591436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.591462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.591636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.591664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.591840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.591866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.592039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.592065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.592227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.592254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.592422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.592449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.592636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.592662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.592834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.592861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.593065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.593092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.593277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.593303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.593470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.593496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.593663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.593690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.593871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.593897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.594050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:35.360 [2024-07-25 19:18:27.594091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.594124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.594292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.594317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.594468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.594495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.594693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.594719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.594888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.594914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.595063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.595089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.595267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.595292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.595467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.595493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.595635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.595661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.595810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.595837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.596030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.596056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.596267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.596295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.596472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.596498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.596676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.596702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.596841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.596867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.597067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.597093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.597283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.597310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.597456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.360 [2024-07-25 19:18:27.597483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.360 qpair failed and we were unable to recover it.
00:27:35.360 [2024-07-25 19:18:27.597657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.597684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.597829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.597855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.598034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.598060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.598212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.598239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.598382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.598408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.598574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.598601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.598795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.598821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.598962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.598988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.599170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.599198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.599438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.599465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.599615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.599641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.599816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.599844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.599994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.600024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.600209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.600237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.600414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.600440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.600608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.600634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.600774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.600800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.601003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.601029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.601176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.601204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.601345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.601372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.601578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.601605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.601774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.601800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.601972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.601998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.602177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.602205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.602376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.602403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.602611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.602637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.602842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.602868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.603024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.603051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.603192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.603220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.603376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.603403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.603550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.603576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.603732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.603759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.603939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.603965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.604119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.604152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.604303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.604329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.604498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.604523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.604693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.604721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.604902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.361 [2024-07-25 19:18:27.604928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.361 qpair failed and we were unable to recover it.
00:27:35.361 [2024-07-25 19:18:27.605073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.605099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.605405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.605431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.605602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.605628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.605778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.605805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.606008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.606034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.606210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.606237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.606386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.606411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.606551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.606576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.606748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.606774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.606932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.606959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.607132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.607159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.607367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.607393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.607563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.607590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.607786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.607813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.607969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.607999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.608183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.608218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.608434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.608460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.608640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.608667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.608841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.608867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.609045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.609071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.609234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.609261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.609412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.609438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.609634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.609661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.609866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.609892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.610035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.610063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.610240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.610267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.610450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.610476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.610677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.610703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.610859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.610885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.611079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.611111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.611289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.611315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.611467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.611496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.611693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.362 [2024-07-25 19:18:27.611720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.362 qpair failed and we were unable to recover it.
00:27:35.362 [2024-07-25 19:18:27.611890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.362 [2024-07-25 19:18:27.611918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.362 qpair failed and we were unable to recover it. 00:27:35.362 [2024-07-25 19:18:27.612090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.362 [2024-07-25 19:18:27.612128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.362 qpair failed and we were unable to recover it. 00:27:35.362 [2024-07-25 19:18:27.612315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.362 [2024-07-25 19:18:27.612341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.362 qpair failed and we were unable to recover it. 00:27:35.362 [2024-07-25 19:18:27.612535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.362 [2024-07-25 19:18:27.612561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.362 qpair failed and we were unable to recover it. 00:27:35.362 [2024-07-25 19:18:27.612712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.362 [2024-07-25 19:18:27.612738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 
00:27:35.363 [2024-07-25 19:18:27.612881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.612907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.613081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.613126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.613302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.613328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.613529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.613555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.613707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.613734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 
00:27:35.363 [2024-07-25 19:18:27.613905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.613932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.614133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.614160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.614330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.614356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.614539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.614565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.614743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.614769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 
00:27:35.363 [2024-07-25 19:18:27.614971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.614997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.615161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.615188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.615338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.615364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.615536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.615562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.615757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.615784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 
00:27:35.363 [2024-07-25 19:18:27.615958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.615984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.616159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.616191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.616393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.616420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.616563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.616590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.616750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.616777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 
00:27:35.363 [2024-07-25 19:18:27.616931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.616957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.617161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.617189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.617385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.617412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.617589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.617616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.617795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.617822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 
00:27:35.363 [2024-07-25 19:18:27.617974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.618002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.618152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.618180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.618353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.618381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.618555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.618582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.618760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.618787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 
00:27:35.363 [2024-07-25 19:18:27.618934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.618960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.363 qpair failed and we were unable to recover it. 00:27:35.363 [2024-07-25 19:18:27.619128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.363 [2024-07-25 19:18:27.619154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.619292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.619319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.619499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.619525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.619694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.619720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 
00:27:35.364 [2024-07-25 19:18:27.619917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.619944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.620123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.620150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.620327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.620355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.620529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.620556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.620755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.620781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 
00:27:35.364 [2024-07-25 19:18:27.620927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.620953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.621132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.621159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.621310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.621336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.621518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.621544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.621719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.621745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 
00:27:35.364 [2024-07-25 19:18:27.621893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.621920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.622118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.622145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.622322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.622348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.622495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.622522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.622728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.622754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 
00:27:35.364 [2024-07-25 19:18:27.622919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.622945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.623086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.623118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.623296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.623322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.623500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.623527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.623670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.623696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 
00:27:35.364 [2024-07-25 19:18:27.623871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.623896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.624067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.624107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.624267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.624295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.624493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.624519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.624693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.624719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 
00:27:35.364 [2024-07-25 19:18:27.624865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.624891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.625088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.625120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.625262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.625289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.625503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.625529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.625675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.625702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 
00:27:35.364 [2024-07-25 19:18:27.625858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.625885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.626033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.626060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.626270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.626297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.626492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.364 [2024-07-25 19:18:27.626519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.364 qpair failed and we were unable to recover it. 00:27:35.364 [2024-07-25 19:18:27.626697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.365 [2024-07-25 19:18:27.626723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.365 qpair failed and we were unable to recover it. 
00:27:35.365 [2024-07-25 19:18:27.626897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.365 [2024-07-25 19:18:27.626923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.365 qpair failed and we were unable to recover it. 00:27:35.365 [2024-07-25 19:18:27.627076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.365 [2024-07-25 19:18:27.627109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.365 qpair failed and we were unable to recover it. 00:27:35.365 [2024-07-25 19:18:27.627282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.365 [2024-07-25 19:18:27.627307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.365 qpair failed and we were unable to recover it. 00:27:35.365 [2024-07-25 19:18:27.627461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.365 [2024-07-25 19:18:27.627488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.365 qpair failed and we were unable to recover it. 00:27:35.365 [2024-07-25 19:18:27.627655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.365 [2024-07-25 19:18:27.627681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.365 qpair failed and we were unable to recover it. 
00:27:35.368 [2024-07-25 19:18:27.649531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.649557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.649728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.649754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.649902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.649928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.650135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.650161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.650309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.650336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 
00:27:35.368 [2024-07-25 19:18:27.650512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.650538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.650688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.650715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.650867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.650893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.651097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.651128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.651268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.651295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 
00:27:35.368 [2024-07-25 19:18:27.651451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.651477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.651617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.651643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.651799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.651825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.651994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.652021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.652186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.652214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 
00:27:35.368 [2024-07-25 19:18:27.652448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.652474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.652645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.652672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.652867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.652893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.653069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.653099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.653269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.653295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 
00:27:35.368 [2024-07-25 19:18:27.653439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.653466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.653612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.653638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.653815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.653842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.654039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.654065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.654222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.654249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 
00:27:35.368 [2024-07-25 19:18:27.654418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.654444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.654613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.654640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.654811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.654837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.368 [2024-07-25 19:18:27.655012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.368 [2024-07-25 19:18:27.655040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.368 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.655234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.655261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 
00:27:35.369 [2024-07-25 19:18:27.655433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.655459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.655630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.655656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.655818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.655844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.656013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.656041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.656215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.656243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 
00:27:35.369 [2024-07-25 19:18:27.656420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.656447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.656625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.656652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.656893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.656920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.657111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.657138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.657317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.657343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 
00:27:35.369 [2024-07-25 19:18:27.657511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.657539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.657707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.657734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.657909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.657935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.658111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.658138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.658314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.658340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 
00:27:35.369 [2024-07-25 19:18:27.658521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.658547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.658717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.658744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.658915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.658941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.659091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.659125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.659303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.659329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 
00:27:35.369 [2024-07-25 19:18:27.659501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.659527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.659703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.659729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.659941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.659967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.660148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.660176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.660325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.660351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 
00:27:35.369 [2024-07-25 19:18:27.660494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.660520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.660675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.660701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.660970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.660996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.661173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.661205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.661359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.661385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 
00:27:35.369 [2024-07-25 19:18:27.661550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.661577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.661774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.661800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.661973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.662000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.662157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.662185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 00:27:35.369 [2024-07-25 19:18:27.662390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.369 [2024-07-25 19:18:27.662416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.369 qpair failed and we were unable to recover it. 
00:27:35.369 [2024-07-25 19:18:27.662589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.370 [2024-07-25 19:18:27.662615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.370 qpair failed and we were unable to recover it. 00:27:35.370 [2024-07-25 19:18:27.662790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.370 [2024-07-25 19:18:27.662817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.370 qpair failed and we were unable to recover it. 00:27:35.370 [2024-07-25 19:18:27.662963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.370 [2024-07-25 19:18:27.662989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.370 qpair failed and we were unable to recover it. 00:27:35.370 [2024-07-25 19:18:27.663143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.370 [2024-07-25 19:18:27.663171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.370 qpair failed and we were unable to recover it. 00:27:35.370 [2024-07-25 19:18:27.663367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.370 [2024-07-25 19:18:27.663392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.370 qpair failed and we were unable to recover it. 
00:27:35.370 [2024-07-25 19:18:27.663540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.370 [2024-07-25 19:18:27.663567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.370 qpair failed and we were unable to recover it. 00:27:35.370 [2024-07-25 19:18:27.663744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.370 [2024-07-25 19:18:27.663769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.370 qpair failed and we were unable to recover it. 00:27:35.370 [2024-07-25 19:18:27.663927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.370 [2024-07-25 19:18:27.663953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.370 qpair failed and we were unable to recover it. 00:27:35.370 [2024-07-25 19:18:27.664137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.370 [2024-07-25 19:18:27.664167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.370 qpair failed and we were unable to recover it. 00:27:35.370 [2024-07-25 19:18:27.664316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.370 [2024-07-25 19:18:27.664343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.370 qpair failed and we were unable to recover it. 
00:27:35.370 [2024-07-25 19:18:27.664512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.370 [2024-07-25 19:18:27.664538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.370 qpair failed and we were unable to recover it. 00:27:35.370 [2024-07-25 19:18:27.664714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.370 [2024-07-25 19:18:27.664741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.370 qpair failed and we were unable to recover it. 00:27:35.370 [2024-07-25 19:18:27.664882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.370 [2024-07-25 19:18:27.664909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.370 qpair failed and we were unable to recover it. 00:27:35.370 [2024-07-25 19:18:27.665078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.370 [2024-07-25 19:18:27.665109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.370 qpair failed and we were unable to recover it. 00:27:35.370 [2024-07-25 19:18:27.665267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.370 [2024-07-25 19:18:27.665294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.370 qpair failed and we were unable to recover it. 
00:27:35.373 [2024-07-25 19:18:27.687017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.373 [2024-07-25 19:18:27.687042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.373 qpair failed and we were unable to recover it. 00:27:35.373 [2024-07-25 19:18:27.687215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.373 [2024-07-25 19:18:27.687242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.373 qpair failed and we were unable to recover it. 00:27:35.373 [2024-07-25 19:18:27.687392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.373 [2024-07-25 19:18:27.687417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.373 qpair failed and we were unable to recover it. 00:27:35.373 [2024-07-25 19:18:27.687561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.373 [2024-07-25 19:18:27.687588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.373 qpair failed and we were unable to recover it. 00:27:35.373 [2024-07-25 19:18:27.687733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.373 [2024-07-25 19:18:27.687760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.373 qpair failed and we were unable to recover it. 
00:27:35.373 [2024-07-25 19:18:27.687932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.373 [2024-07-25 19:18:27.687958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.373 qpair failed and we were unable to recover it. 00:27:35.373 [2024-07-25 19:18:27.688100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.373 [2024-07-25 19:18:27.688139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.373 qpair failed and we were unable to recover it. 00:27:35.373 [2024-07-25 19:18:27.688290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.373 [2024-07-25 19:18:27.688316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.373 qpair failed and we were unable to recover it. 00:27:35.373 [2024-07-25 19:18:27.688462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.373 [2024-07-25 19:18:27.688488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.373 qpair failed and we were unable to recover it. 00:27:35.373 [2024-07-25 19:18:27.688658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.373 [2024-07-25 19:18:27.688685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.373 qpair failed and we were unable to recover it. 
00:27:35.373 [2024-07-25 19:18:27.688837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.373 [2024-07-25 19:18:27.688863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.373 qpair failed and we were unable to recover it. 00:27:35.373 [2024-07-25 19:18:27.689043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.373 [2024-07-25 19:18:27.689070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.373 qpair failed and we were unable to recover it. 00:27:35.373 [2024-07-25 19:18:27.689235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.373 [2024-07-25 19:18:27.689263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.373 qpair failed and we were unable to recover it. 00:27:35.373 [2024-07-25 19:18:27.689412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.373 [2024-07-25 19:18:27.689439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.373 qpair failed and we were unable to recover it. 00:27:35.373 [2024-07-25 19:18:27.689635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.373 [2024-07-25 19:18:27.689660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.373 qpair failed and we were unable to recover it. 
00:27:35.373 [2024-07-25 19:18:27.689864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.689890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.690061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.690088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.690272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.690298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.690444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.690470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.690640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.690665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 
00:27:35.374 [2024-07-25 19:18:27.690861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.690887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.691061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.691087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.691257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.691283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.691446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.691471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.691614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.691640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 
00:27:35.374 [2024-07-25 19:18:27.691809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.691835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.691980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.692006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.692187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.692214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.692388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.692417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.692593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.692620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 
00:27:35.374 [2024-07-25 19:18:27.692819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.692845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.693018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.693044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.693226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.693252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.693426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.693451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.693617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.693642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 
00:27:35.374 [2024-07-25 19:18:27.693791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.693818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.693970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.693995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.694174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.694201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.694376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.694402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.694547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.694574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 
00:27:35.374 [2024-07-25 19:18:27.694747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.694772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.694939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.694964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.695145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.695172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.695341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.695367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.695530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.695555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 
00:27:35.374 [2024-07-25 19:18:27.695726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.695752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.695926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.695952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.696096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.696135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.696309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.696336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.696488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.696513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 
00:27:35.374 [2024-07-25 19:18:27.696680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.696706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.696870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.374 [2024-07-25 19:18:27.696896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.374 qpair failed and we were unable to recover it. 00:27:35.374 [2024-07-25 19:18:27.697063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.697087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.697249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.697274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.697452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.697479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 
00:27:35.375 [2024-07-25 19:18:27.697679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.697704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.697849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.697874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.698042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.698067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.698214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.698241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.698411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.698437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 
00:27:35.375 [2024-07-25 19:18:27.698629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.698654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.698812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.698836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.699031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.699057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.699209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.699236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.699408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.699434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 
00:27:35.375 [2024-07-25 19:18:27.699584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.699609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.699801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.699827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.700032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.700057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.700264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.700296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.700473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.700500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 
00:27:35.375 [2024-07-25 19:18:27.700670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.700695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.700845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.700872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.701022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.701048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.701226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.701254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.701397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.701422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 
00:27:35.375 [2024-07-25 19:18:27.701573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.701600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.701801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.701827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.701967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.701992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.702141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.702168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.702344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.702371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 
00:27:35.375 [2024-07-25 19:18:27.702572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.702598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.702763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.702789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.702945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.702972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.703120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.703147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.703343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.703370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 
00:27:35.375 [2024-07-25 19:18:27.703541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.703567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.703706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.703732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.703901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.703927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.704106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.704132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 00:27:35.375 [2024-07-25 19:18:27.704281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.375 [2024-07-25 19:18:27.704307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.375 qpair failed and we were unable to recover it. 
00:27:35.376 [2024-07-25 19:18:27.704499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.704525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.704727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.704752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.704918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.704944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.705122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.705154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.705308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.705336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 
00:27:35.376 [2024-07-25 19:18:27.705526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.705552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.705729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.705755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.705907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.705933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.706135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.706162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.706321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.706346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 
00:27:35.376 [2024-07-25 19:18:27.706524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.706550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.706743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.706769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.706919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.706945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.707114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.707140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.707282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.707307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 
00:27:35.376 [2024-07-25 19:18:27.707505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.707531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.707705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.707730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.707878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.707903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.708084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.708118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.708270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.708298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 
00:27:35.376 [2024-07-25 19:18:27.708450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.708476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.708652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.708677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.708816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.708841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.709036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.709061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.709242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.709269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 
00:27:35.376 [2024-07-25 19:18:27.709414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.709440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.709590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.709614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.709756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.709783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.709961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.709987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.710133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.710170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 
00:27:35.376 [2024-07-25 19:18:27.710345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.710370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.710375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.376 [2024-07-25 19:18:27.710420] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.376 [2024-07-25 19:18:27.710439] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.376 [2024-07-25 19:18:27.710452] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.376 [2024-07-25 19:18:27.710462] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:35.376 [2024-07-25 19:18:27.710518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.710544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 00:27:35.376 [2024-07-25 19:18:27.710709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.376 [2024-07-25 19:18:27.710737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.376 qpair failed and we were unable to recover it. 
00:27:35.376 [2024-07-25 19:18:27.710707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:35.376 [2024-07-25 19:18:27.710758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:35.376 [2024-07-25 19:18:27.710805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:35.376 [2024-07-25 19:18:27.710808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:35.376 [2024-07-25 19:18:27.710912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.710937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.711087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.711120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.711269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.711294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.711468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.711494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 
00:27:35.377 [2024-07-25 19:18:27.711669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.711696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.711844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.711870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.712027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.712052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.712214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.712241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.712392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.712418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 
00:27:35.377 [2024-07-25 19:18:27.712588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.712614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.712771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.712797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.712974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.713000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.713147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.713175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.713340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.713366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 
00:27:35.377 [2024-07-25 19:18:27.713516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.713542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.713711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.713737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.713885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.713913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.714056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.714084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.714243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.714270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 
00:27:35.377 [2024-07-25 19:18:27.714433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.714459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.714599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.714625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.714769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.714795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.714935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.714965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.715123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.715150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 
00:27:35.377 [2024-07-25 19:18:27.715315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.715341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.715496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.715522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.715700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.715725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.715904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.715930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.716096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.716127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 
00:27:35.377 [2024-07-25 19:18:27.716285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.716311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.716474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.716500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.716671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.716697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.716842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.716870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.377 qpair failed and we were unable to recover it. 00:27:35.377 [2024-07-25 19:18:27.717041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.377 [2024-07-25 19:18:27.717068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 
00:27:35.378 [2024-07-25 19:18:27.717239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-25 19:18:27.717266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 00:27:35.378 [2024-07-25 19:18:27.717420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-25 19:18:27.717446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 00:27:35.378 [2024-07-25 19:18:27.717601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-25 19:18:27.717628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 00:27:35.378 [2024-07-25 19:18:27.717777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-25 19:18:27.717803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 00:27:35.378 [2024-07-25 19:18:27.717943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-25 19:18:27.717969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 
00:27:35.378 [2024-07-25 19:18:27.718129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-25 19:18:27.718167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 00:27:35.378 [2024-07-25 19:18:27.718309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-25 19:18:27.718336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 00:27:35.378 [2024-07-25 19:18:27.718508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-25 19:18:27.718534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 00:27:35.378 [2024-07-25 19:18:27.718698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-25 19:18:27.718724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 00:27:35.378 [2024-07-25 19:18:27.718865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-25 19:18:27.718891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 
00:27:35.378 [2024-07-25 19:18:27.719065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-25 19:18:27.719092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 00:27:35.378 [2024-07-25 19:18:27.719261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-25 19:18:27.719288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 00:27:35.378 [2024-07-25 19:18:27.719466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-25 19:18:27.719492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 00:27:35.378 [2024-07-25 19:18:27.719636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-25 19:18:27.719662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 00:27:35.378 [2024-07-25 19:18:27.719806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-25 19:18:27.719833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.378 qpair failed and we were unable to recover it. 
00:27:35.381 [2024-07-25 19:18:27.741357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.741383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.381 qpair failed and we were unable to recover it. 00:27:35.381 [2024-07-25 19:18:27.741561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.741587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.381 qpair failed and we were unable to recover it. 00:27:35.381 [2024-07-25 19:18:27.741736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.741762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.381 qpair failed and we were unable to recover it. 00:27:35.381 [2024-07-25 19:18:27.741905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.741930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.381 qpair failed and we were unable to recover it. 00:27:35.381 [2024-07-25 19:18:27.742076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.742108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.381 qpair failed and we were unable to recover it. 
00:27:35.381 [2024-07-25 19:18:27.742262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.742289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.381 qpair failed and we were unable to recover it. 00:27:35.381 [2024-07-25 19:18:27.742475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.742501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.381 qpair failed and we were unable to recover it. 00:27:35.381 [2024-07-25 19:18:27.742681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.742707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.381 qpair failed and we were unable to recover it. 00:27:35.381 [2024-07-25 19:18:27.742851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.742877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.381 qpair failed and we were unable to recover it. 00:27:35.381 [2024-07-25 19:18:27.743028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.743053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.381 qpair failed and we were unable to recover it. 
00:27:35.381 [2024-07-25 19:18:27.743272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.743299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.381 qpair failed and we were unable to recover it. 00:27:35.381 [2024-07-25 19:18:27.743468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.743493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.381 qpair failed and we were unable to recover it. 00:27:35.381 [2024-07-25 19:18:27.743663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.743690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.381 qpair failed and we were unable to recover it. 00:27:35.381 [2024-07-25 19:18:27.743831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.743856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.381 qpair failed and we were unable to recover it. 00:27:35.381 [2024-07-25 19:18:27.744019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.744044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.381 qpair failed and we were unable to recover it. 
00:27:35.381 [2024-07-25 19:18:27.744181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.381 [2024-07-25 19:18:27.744208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.744373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.744399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.744548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.744575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.744715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.744741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.744903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.744929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 
00:27:35.382 [2024-07-25 19:18:27.745079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.745111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.745270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.745296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.745439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.745465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.745631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.745661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.745838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.745863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 
00:27:35.382 [2024-07-25 19:18:27.746000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.746025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.746167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.746194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.746346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.746372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.746576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.746603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.746776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.746803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 
00:27:35.382 [2024-07-25 19:18:27.746965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.746991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.747171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.747197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.747365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.747390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.747532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.747558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.747733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.747759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 
00:27:35.382 [2024-07-25 19:18:27.747903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.747929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.748095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.748128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.748279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.748307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.748487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.748512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.748683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.748708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 
00:27:35.382 [2024-07-25 19:18:27.748849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.748874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.749029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.749055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.749225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.749252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.749393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.749419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.749561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.749587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 
00:27:35.382 [2024-07-25 19:18:27.749749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.749774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.382 qpair failed and we were unable to recover it. 00:27:35.382 [2024-07-25 19:18:27.749917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.382 [2024-07-25 19:18:27.749942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.750138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.750165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.750340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.750366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.750557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.750582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 
00:27:35.383 [2024-07-25 19:18:27.750752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.750777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.750976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.751002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.751153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.751180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.751370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.751396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.751553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.751578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 
00:27:35.383 [2024-07-25 19:18:27.751729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.751755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.751904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.751929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.752185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.752212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.752372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.752397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.752569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.752595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 
00:27:35.383 [2024-07-25 19:18:27.752755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.752781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.752926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.752952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.753130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.753157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.753318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.753349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.753625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.753651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 
00:27:35.383 [2024-07-25 19:18:27.753832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.753858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.754002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.754028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.754177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.754204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.754367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.754392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.754537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.754563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 
00:27:35.383 [2024-07-25 19:18:27.754735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.754760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.754899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.754924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.755084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.755115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.755267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.755293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.755470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.755495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 
00:27:35.383 [2024-07-25 19:18:27.755639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.755665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.755838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.755864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.756015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.756041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.756221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.756248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.756406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.756431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 
00:27:35.383 [2024-07-25 19:18:27.756592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.756619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.756764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.756790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.756946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.383 [2024-07-25 19:18:27.756971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.383 qpair failed and we were unable to recover it. 00:27:35.383 [2024-07-25 19:18:27.757118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.757147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.757303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.757331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 
00:27:35.384 [2024-07-25 19:18:27.757535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.757561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.757712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.757738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.757913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.757940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.758141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.758169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.758317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.758345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 
00:27:35.384 [2024-07-25 19:18:27.758495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.758521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.758681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.758707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.758967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.758993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.759173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.759200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.759379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.759403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 
00:27:35.384 [2024-07-25 19:18:27.759544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.759569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.759724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.759750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.759918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.759943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.760098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.760130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.760280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.760306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 
00:27:35.384 [2024-07-25 19:18:27.760481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.760507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.760647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.760672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.760826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.760852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.761029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.761060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.761216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.761243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 
00:27:35.384 [2024-07-25 19:18:27.761394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.761419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.761570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.761597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.761742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.761768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.762040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.762066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.762249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.762275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 
00:27:35.384 [2024-07-25 19:18:27.762448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.762475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.762625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.762651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.762854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.762880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.763042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.763070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.763251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.763278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 
00:27:35.384 [2024-07-25 19:18:27.763425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.763453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.763634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.763661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.763810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.763836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.764014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.764041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.384 [2024-07-25 19:18:27.764193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.764220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 
00:27:35.384 [2024-07-25 19:18:27.764377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.384 [2024-07-25 19:18:27.764403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.384 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.764548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.764573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.764737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.764763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.764910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.764936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.765091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.765128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 
00:27:35.385 [2024-07-25 19:18:27.765277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.765303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.765446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.765473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.765650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.765676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.765849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.765875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.766039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.766064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 
00:27:35.385 [2024-07-25 19:18:27.766245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.766272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.766454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.766480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.766625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.766651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.766798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.766823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.766995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.767021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 
00:27:35.385 [2024-07-25 19:18:27.767173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.767199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.767380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.767405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.767668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.767694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.767863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.767888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.768037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.768063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 
00:27:35.385 [2024-07-25 19:18:27.768221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.768246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.768397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.768423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.768570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.768596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.768759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.768789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.768946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.768971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 
00:27:35.385 [2024-07-25 19:18:27.769132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.769160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.769337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.769364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.769519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.769545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.769694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.769720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.769899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.769926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 
00:27:35.385 [2024-07-25 19:18:27.770069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.770095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.770361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.770387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.770564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.770591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.770768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.770794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.770951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.770977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 
00:27:35.385 [2024-07-25 19:18:27.771154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.771181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.771359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.771384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.771540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.771565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.385 qpair failed and we were unable to recover it. 00:27:35.385 [2024-07-25 19:18:27.771718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.385 [2024-07-25 19:18:27.771745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.771902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.771928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 
00:27:35.386 [2024-07-25 19:18:27.772068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.772094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.772254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.772280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.772468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.772494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.772645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.772671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.772839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.772864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 
00:27:35.386 [2024-07-25 19:18:27.773012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.773039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.773188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.773215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.773394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.773419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.773561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.773586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.773729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.773756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 
00:27:35.386 [2024-07-25 19:18:27.773900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.773925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.774087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.774121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.774279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.774306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.774454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.774480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.774655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.774680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 
00:27:35.386 [2024-07-25 19:18:27.774845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.774870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.775019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.775046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.775206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.775233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.775385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.775411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.775586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.775613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 
00:27:35.386 [2024-07-25 19:18:27.775766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.775792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.775945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.775971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.776148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.776174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.776317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.776349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.776538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.776565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 
00:27:35.386 [2024-07-25 19:18:27.776739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.776765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.776909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.776934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.777108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.777142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.777306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.777332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.777503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.777528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 
00:27:35.386 [2024-07-25 19:18:27.777705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.777730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.777919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.777945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.778099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.778139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.778286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.778312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.386 [2024-07-25 19:18:27.778487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.778513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 
00:27:35.386 [2024-07-25 19:18:27.778652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.386 [2024-07-25 19:18:27.778678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.386 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.778855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.778879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.779032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.779058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.779215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.779241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.779416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.779443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 
00:27:35.387 [2024-07-25 19:18:27.779611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.779636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.779778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.779803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.779946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.779972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.780151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.780177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.780328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.780353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 
00:27:35.387 [2024-07-25 19:18:27.780513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.780539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.780684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.780709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.780883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.780907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.781092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.781129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.781274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.781300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 
00:27:35.387 [2024-07-25 19:18:27.781455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.781483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.781637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.781663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.781823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.781849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.782022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.782048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.782202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.782229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 
00:27:35.387 [2024-07-25 19:18:27.782376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.782402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.782603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.782630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.782771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.782796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.782941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.782966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.783130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.783157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 
00:27:35.387 [2024-07-25 19:18:27.783328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.783353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.783585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.783610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.783749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.783774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.783943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.783973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.784120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.784146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 
00:27:35.387 [2024-07-25 19:18:27.784321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.784346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.387 [2024-07-25 19:18:27.784495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.387 [2024-07-25 19:18:27.784521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.387 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.784674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.784701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.784968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.784996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.785173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.785201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 
00:27:35.388 [2024-07-25 19:18:27.785339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.785365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.785509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.785537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.785710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.785735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.785874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.785900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.786075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.786106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 
00:27:35.388 [2024-07-25 19:18:27.786257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.786282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.786432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.786457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.786628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.786654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.786802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.786828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.786995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.787020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 
00:27:35.388 [2024-07-25 19:18:27.787197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.787223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.787382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.787408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.787546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.787571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.787747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.787772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.787924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.787952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 
00:27:35.388 [2024-07-25 19:18:27.788097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.788130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.788318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.788344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.788502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.788527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.788665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.788691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.788838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.788863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 
00:27:35.388 [2024-07-25 19:18:27.789016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.789043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.789190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.789217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.789396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.789423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.789581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.789607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.789750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.789776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 
00:27:35.388 [2024-07-25 19:18:27.789953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.789979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.790136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.790163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.790313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.790339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.790518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.790544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.790698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.790724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 
00:27:35.388 [2024-07-25 19:18:27.790987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.791013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.791167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.791193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.791340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.791367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.791539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.388 [2024-07-25 19:18:27.791571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.388 qpair failed and we were unable to recover it. 00:27:35.388 [2024-07-25 19:18:27.791743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.389 [2024-07-25 19:18:27.791769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.389 qpair failed and we were unable to recover it. 
00:27:35.389 [2024-07-25 19:18:27.791916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.389 [2024-07-25 19:18:27.791941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.389 qpair failed and we were unable to recover it. 00:27:35.389 [2024-07-25 19:18:27.792112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.389 [2024-07-25 19:18:27.792138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.389 qpair failed and we were unable to recover it. 00:27:35.389 [2024-07-25 19:18:27.792283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.389 [2024-07-25 19:18:27.792310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.389 qpair failed and we were unable to recover it. 00:27:35.389 [2024-07-25 19:18:27.792486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.389 [2024-07-25 19:18:27.792512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.389 qpair failed and we were unable to recover it. 00:27:35.389 [2024-07-25 19:18:27.792677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.389 [2024-07-25 19:18:27.792702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.389 qpair failed and we were unable to recover it. 
00:27:35.389 [2024-07-25 19:18:27.792838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.389 [2024-07-25 19:18:27.792864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.389 qpair failed and we were unable to recover it. 00:27:35.389 [2024-07-25 19:18:27.793039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.389 [2024-07-25 19:18:27.793065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.389 qpair failed and we were unable to recover it. 00:27:35.389 [2024-07-25 19:18:27.793235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.389 [2024-07-25 19:18:27.793261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.389 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.793423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.793448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.793602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.793628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 
00:27:35.673 [2024-07-25 19:18:27.793782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.793808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.793958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.793985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.794157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.794184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.794336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.794364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.794520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.794545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 
00:27:35.673 [2024-07-25 19:18:27.794687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.794713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.794870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.794896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.795051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.795078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.795248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.795275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.795428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.795456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 
00:27:35.673 [2024-07-25 19:18:27.795633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.795659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.795803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.795828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.795990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.796015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.796188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.796215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.796377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.796402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 
00:27:35.673 [2024-07-25 19:18:27.796580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.796606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.796750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.796776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.673 qpair failed and we were unable to recover it. 00:27:35.673 [2024-07-25 19:18:27.796921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.673 [2024-07-25 19:18:27.796946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.797093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.797131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.797295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.797321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 
00:27:35.674 [2024-07-25 19:18:27.797494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.797520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.797694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.797720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.797868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.797894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.798031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.798056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.798213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.798241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 
00:27:35.674 [2024-07-25 19:18:27.798394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.798420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.798575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.798601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.798746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.798773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.798945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.798975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.799136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.799162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 
00:27:35.674 [2024-07-25 19:18:27.799349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.799375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.799521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.799548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.799697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.799723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.799994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.800020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.800157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.800183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 
00:27:35.674 [2024-07-25 19:18:27.800354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.800380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.800545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.800571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.800754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.800780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.800922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.800948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.801208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.801236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 
00:27:35.674 [2024-07-25 19:18:27.801392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.801418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.801569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.801597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.801783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.801808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.801951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.801976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.802157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.802184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 
00:27:35.674 [2024-07-25 19:18:27.802371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.802397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.802539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.802565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.802733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.802758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.802901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.802928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.803112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.803138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 
00:27:35.674 [2024-07-25 19:18:27.803312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.803338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.803515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.803540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.803710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.803736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.803889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.674 [2024-07-25 19:18:27.803917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.674 qpair failed and we were unable to recover it. 00:27:35.674 [2024-07-25 19:18:27.804094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.804126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 
00:27:35.675 [2024-07-25 19:18:27.804270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.804299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.804447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.804473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.804644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.804670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.804820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.804845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.804985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.805011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 
00:27:35.675 [2024-07-25 19:18:27.805159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.805188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.805326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.805352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.805495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.805521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.805674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.805700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.805873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.805900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 
00:27:35.675 [2024-07-25 19:18:27.806044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.806070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.806283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.806309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.806450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.806477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.806646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.806672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.806823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.806849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 
00:27:35.675 [2024-07-25 19:18:27.806997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.807023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.807164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.807190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.807342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.807369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.807541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.807567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.807711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.807737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 
00:27:35.675 [2024-07-25 19:18:27.807885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.807910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.808056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.808083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.808243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.808270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.808436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.808461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.808631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.808657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 
00:27:35.675 [2024-07-25 19:18:27.808794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.808820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.808974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.808999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.809160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.809188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.809359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.809386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.809551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.809577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 
00:27:35.675 [2024-07-25 19:18:27.809726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.809753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.809897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.809923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.810069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.810095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.810272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.810298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 00:27:35.675 [2024-07-25 19:18:27.810443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.675 [2024-07-25 19:18:27.810469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.675 qpair failed and we were unable to recover it. 
00:27:35.679 [2024-07-25 19:18:27.831193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.831219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.831381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.831406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.831573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.831598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.831767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.831793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.831931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.831957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 
00:27:35.679 [2024-07-25 19:18:27.832130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.832157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.832302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.832328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.832480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.832505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.832651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.832676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.832820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.832846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 
00:27:35.679 [2024-07-25 19:18:27.833016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.833042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.833231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.833258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.833405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.833431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.833606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.833633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.833780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.833805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 
00:27:35.679 [2024-07-25 19:18:27.833943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.833972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.834150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.834177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.834339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.834364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.834505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.834530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.834703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.834727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 
00:27:35.679 [2024-07-25 19:18:27.834872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.834898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.835071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.835097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.835274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.835300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.679 [2024-07-25 19:18:27.835448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.679 [2024-07-25 19:18:27.835473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.679 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.835621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.835647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 
00:27:35.680 [2024-07-25 19:18:27.835799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.835825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.836001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.836026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.836167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.836194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.836338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.836363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.836562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.836587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 
00:27:35.680 [2024-07-25 19:18:27.836727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.836752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.836921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.836948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.837135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.837163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.837340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.837365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.837507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.837534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 
00:27:35.680 [2024-07-25 19:18:27.837700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.837726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.837883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.837908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.838061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.838086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.838259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.838285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.838458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.838484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 
00:27:35.680 [2024-07-25 19:18:27.838648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.838673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.838818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.838843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.839020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.839047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.839229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.839255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.839429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.839454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 
00:27:35.680 [2024-07-25 19:18:27.839626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.839651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.839789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.839815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.839988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.840014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.840188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.840215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.840380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.840406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 
00:27:35.680 [2024-07-25 19:18:27.840581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.840606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.840767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.840793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.840942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.840967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.841141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.841168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.841316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.841343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 
00:27:35.680 [2024-07-25 19:18:27.841494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.841524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.841668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.841694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.680 [2024-07-25 19:18:27.841844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.680 [2024-07-25 19:18:27.841869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.680 qpair failed and we were unable to recover it. 00:27:35.681 [2024-07-25 19:18:27.842034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.842060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 00:27:35.681 [2024-07-25 19:18:27.842211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.842238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 
00:27:35.681 [2024-07-25 19:18:27.842411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.842436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 00:27:35.681 [2024-07-25 19:18:27.842584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.842610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 00:27:35.681 [2024-07-25 19:18:27.842746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.842772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 00:27:35.681 [2024-07-25 19:18:27.842915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.842940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 00:27:35.681 [2024-07-25 19:18:27.843113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.843140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 
00:27:35.681 [2024-07-25 19:18:27.843297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.843323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 00:27:35.681 [2024-07-25 19:18:27.843487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.843512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 00:27:35.681 [2024-07-25 19:18:27.843661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.843688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 00:27:35.681 [2024-07-25 19:18:27.843849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.843876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 00:27:35.681 [2024-07-25 19:18:27.844054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.844079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 
00:27:35.681 [2024-07-25 19:18:27.844226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.844252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 00:27:35.681 [2024-07-25 19:18:27.844410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.844435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 00:27:35.681 [2024-07-25 19:18:27.844575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.844600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 00:27:35.681 [2024-07-25 19:18:27.844804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.844829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 00:27:35.681 [2024-07-25 19:18:27.844987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.681 [2024-07-25 19:18:27.845013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.681 qpair failed and we were unable to recover it. 
00:27:35.681 [2024-07-25 19:18:27.845180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.681 [2024-07-25 19:18:27.845208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.681 qpair failed and we were unable to recover it.
[the same three-line error sequence repeats continuously for tqpair=0x7f63d4000b90 (addr=10.0.0.2, port=4420) through timestamp 2024-07-25 19:18:27.866263]
00:27:35.685 [2024-07-25 19:18:27.866431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.866456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.866596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.866621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.866754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.866780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.866922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.866946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.867115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.867141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 
00:27:35.685 [2024-07-25 19:18:27.867280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.867305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.867450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.867477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.867619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.867645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.867815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.867840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.868001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.868026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 
00:27:35.685 [2024-07-25 19:18:27.868169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.868195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.868366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.868391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.868573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.868599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.868751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.868777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.868954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.868981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 
00:27:35.685 [2024-07-25 19:18:27.869128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.869154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.869299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.869324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.869475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.869501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.869672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.869697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.869898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.869923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 
00:27:35.685 [2024-07-25 19:18:27.870092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.870128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.870278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.870304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.870474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.870499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.870662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.870687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.870829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.870855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 
00:27:35.685 [2024-07-25 19:18:27.871021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.871051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.871193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.871219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.871361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.871386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.871536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.871563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.871737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.871762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 
00:27:35.685 [2024-07-25 19:18:27.871930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.871955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.872127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.872153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.872299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.872326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.685 [2024-07-25 19:18:27.872484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.685 [2024-07-25 19:18:27.872509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.685 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.872671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.872696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 
00:27:35.686 [2024-07-25 19:18:27.872845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.872871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.873019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.873044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.873216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.873243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.873392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.873417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.873596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.873622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 
00:27:35.686 [2024-07-25 19:18:27.873798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.873824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.873988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.874014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.874164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.874191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.874342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.874369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.874538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.874563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 
00:27:35.686 [2024-07-25 19:18:27.874702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.874727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.874892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.874918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.875068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.875093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.875288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.875314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.875481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.875507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 
00:27:35.686 [2024-07-25 19:18:27.875693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.875719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.875868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.875893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.876065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.876090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.876278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.876304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.876442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.876467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 
00:27:35.686 [2024-07-25 19:18:27.876641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.876666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.876839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.876864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.877037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.877062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.877206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.877232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.877374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.877400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 
00:27:35.686 [2024-07-25 19:18:27.877536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.877561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.877725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.877750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.877919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.877944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.878090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.878127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.878293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.878319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 
00:27:35.686 [2024-07-25 19:18:27.878476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.878507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.878651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.878677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.878848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.878875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.879045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.879071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 00:27:35.686 [2024-07-25 19:18:27.879227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.686 [2024-07-25 19:18:27.879254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.686 qpair failed and we were unable to recover it. 
00:27:35.687 [2024-07-25 19:18:27.879399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.687 [2024-07-25 19:18:27.879424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.687 qpair failed and we were unable to recover it. 00:27:35.687 [2024-07-25 19:18:27.879586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.687 [2024-07-25 19:18:27.879612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.687 qpair failed and we were unable to recover it. 00:27:35.687 [2024-07-25 19:18:27.879752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.687 [2024-07-25 19:18:27.879778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.687 qpair failed and we were unable to recover it. 00:27:35.687 [2024-07-25 19:18:27.879922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.687 [2024-07-25 19:18:27.879947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.687 qpair failed and we were unable to recover it. 00:27:35.687 [2024-07-25 19:18:27.880094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.687 [2024-07-25 19:18:27.880127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.687 qpair failed and we were unable to recover it. 
00:27:35.687 [2024-07-25 19:18:27.880328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.687 [2024-07-25 19:18:27.880354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.687 qpair failed and we were unable to recover it.
00:27:35.690 [2024-07-25 19:18:27.901640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.901665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 00:27:35.690 [2024-07-25 19:18:27.901817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.901844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 00:27:35.690 [2024-07-25 19:18:27.901994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.902019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 00:27:35.690 [2024-07-25 19:18:27.902189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.902214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 00:27:35.690 [2024-07-25 19:18:27.902393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.902419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 
00:27:35.690 [2024-07-25 19:18:27.902598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.902624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 00:27:35.690 [2024-07-25 19:18:27.902766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.902791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 00:27:35.690 [2024-07-25 19:18:27.902951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.902976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 00:27:35.690 [2024-07-25 19:18:27.903135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.903162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 00:27:35.690 [2024-07-25 19:18:27.903361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.903386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 
00:27:35.690 [2024-07-25 19:18:27.903556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.903583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 00:27:35.690 [2024-07-25 19:18:27.903752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.903778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 00:27:35.690 [2024-07-25 19:18:27.903956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.903982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 00:27:35.690 [2024-07-25 19:18:27.904148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.904175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 00:27:35.690 [2024-07-25 19:18:27.904343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.904369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 
00:27:35.690 [2024-07-25 19:18:27.904512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.904539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 00:27:35.690 [2024-07-25 19:18:27.904717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.690 [2024-07-25 19:18:27.904744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.690 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.904888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.904914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.905082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.905117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.905272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.905298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 
00:27:35.691 [2024-07-25 19:18:27.905444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.905470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.905650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.905676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.905826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.905855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.906000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.906026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.906191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.906219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 
00:27:35.691 [2024-07-25 19:18:27.906399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.906425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.906568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.906593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.906741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.906767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.906909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.906936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.907077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.907112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 
00:27:35.691 [2024-07-25 19:18:27.907296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.907322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.907474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.907501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.907661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.907686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.907858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.907883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.908029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.908056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 
00:27:35.691 [2024-07-25 19:18:27.908265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.908292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.908442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.908469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.908616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.908642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.908821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.908847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.908996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.909021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 
00:27:35.691 [2024-07-25 19:18:27.909207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.909233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.909390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.909415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.909583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.909609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.909754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.909781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.909923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.909948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 
00:27:35.691 [2024-07-25 19:18:27.910118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.910145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.910311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.910337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.910507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.910532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.910672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.910697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 00:27:35.691 [2024-07-25 19:18:27.910854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.910879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.691 qpair failed and we were unable to recover it. 
00:27:35.691 [2024-07-25 19:18:27.911045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.691 [2024-07-25 19:18:27.911071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.911259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.911285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.911430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.911455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.911631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.911657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.911823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.911848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 
00:27:35.692 [2024-07-25 19:18:27.912015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.912039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.912178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.912204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.912409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.912435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.912582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.912608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.912777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.912802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 
00:27:35.692 [2024-07-25 19:18:27.912948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.912974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.913145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.913172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.913318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.913348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.913487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.913513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.913653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.913679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 
00:27:35.692 [2024-07-25 19:18:27.913855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.913882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.914026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.914052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.914205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.914230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.914373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.914398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.914577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.914603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 
00:27:35.692 [2024-07-25 19:18:27.914745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.914771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.914917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.914942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.915119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.915148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.915295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.915321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.915469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.915495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 
00:27:35.692 [2024-07-25 19:18:27.915664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.915690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.915837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.915862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.916010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.916035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.916189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.916215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.916387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.916412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 
00:27:35.692 [2024-07-25 19:18:27.916559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.916583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.916759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.916784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.916959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.916985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.917139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.917166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 00:27:35.692 [2024-07-25 19:18:27.917307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.692 [2024-07-25 19:18:27.917333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.692 qpair failed and we were unable to recover it. 
00:27:35.693 [2024-07-25 19:18:27.917478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.917504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.917661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.917687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.917852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.917877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.918042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.918066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.918259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.918286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 
00:27:35.693 [2024-07-25 19:18:27.918457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.918484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.918629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.918654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.918798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.918823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.918993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.919019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.919191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.919217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 
00:27:35.693 [2024-07-25 19:18:27.919368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.919393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.919592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.919618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.919770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.919796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.919935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.919960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.920112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.920137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 
00:27:35.693 [2024-07-25 19:18:27.920299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.920326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.920471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.920497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.920639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.920668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.920840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.920866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.921010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.921036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 
00:27:35.693 [2024-07-25 19:18:27.921188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.921214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.921399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.921425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.921586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.921611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.921772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.921798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.921950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.921975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 
00:27:35.693 [2024-07-25 19:18:27.922152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.922180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.922339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.922364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.922524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.922550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.922697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.922722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 00:27:35.693 [2024-07-25 19:18:27.922890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.693 [2024-07-25 19:18:27.922915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.693 qpair failed and we were unable to recover it. 
00:27:35.694 [2024-07-25 19:18:27.923070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.923095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.923247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.923273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.923443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.923469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.923637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.923662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.923835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.923861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 
00:27:35.694 [2024-07-25 19:18:27.924006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.924032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.924204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.924230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.924381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.924406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.924544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.924569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.924715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.924741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 
00:27:35.694 [2024-07-25 19:18:27.924892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.924918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.925056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.925083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.925247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.925274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.925419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.925445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.925642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.925668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 
00:27:35.694 [2024-07-25 19:18:27.925835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.925861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.926005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.926032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.926201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.926229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.926378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.926405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.926578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.926604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 
00:27:35.694 [2024-07-25 19:18:27.926793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.926819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.926960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.926987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.927176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.927202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.927369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.927395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.927541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.927567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 
00:27:35.694 [2024-07-25 19:18:27.927724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.927750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.927895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.927921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.928064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.928094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.928243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.928270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.928414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.928441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 
00:27:35.694 [2024-07-25 19:18:27.928606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.928633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.928800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.928826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.928993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.929019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.929162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.929189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 00:27:35.694 [2024-07-25 19:18:27.929336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.929364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.694 qpair failed and we were unable to recover it. 
00:27:35.694 [2024-07-25 19:18:27.929536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.694 [2024-07-25 19:18:27.929562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.929710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.929737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.929884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.929909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.930077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.930112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.930319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.930345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 
00:27:35.695 [2024-07-25 19:18:27.930486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.930512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.930686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.930712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.930912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.930937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.931084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.931119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.931271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.931297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 
00:27:35.695 [2024-07-25 19:18:27.931457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.931483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.931637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.931662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.931847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.931873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.932010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.932035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.932200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.932226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 
00:27:35.695 [2024-07-25 19:18:27.932374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.932400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.932562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.932587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.932756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.932781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.932953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.932979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 00:27:35.695 [2024-07-25 19:18:27.933134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.933161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 
00:27:35.695 [2024-07-25 19:18:27.933328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.695 [2024-07-25 19:18:27.933354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.695 qpair failed and we were unable to recover it. 
[... the same three-message sequence (posix_sock_create connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats approximately 114 more times between 19:18:27.933 and 19:18:27.954; identical repeats elided.]
00:27:35.699 [2024-07-25 19:18:27.954630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.954655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.954791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.954816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.954985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.955011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.955158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.955185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.955394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.955438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 
00:27:35.699 [2024-07-25 19:18:27.955604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.955632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.955785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.955814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.955956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.955982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.956145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.956174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.956315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.956341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 
00:27:35.699 [2024-07-25 19:18:27.956490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.956517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.956686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.956712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.956868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.956894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.957054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.957080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.957249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.957275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 
00:27:35.699 [2024-07-25 19:18:27.957453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.957480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.957624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.957650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.957807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.957838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.958016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.958041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.958188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.958214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 
00:27:35.699 [2024-07-25 19:18:27.958381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.958406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.958566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.958593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.958761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.958787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.958956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.958981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.959140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.959167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 
00:27:35.699 [2024-07-25 19:18:27.959364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.959389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.959551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.959578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.959750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.959776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.959928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.959953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.960108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.960136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 
00:27:35.699 [2024-07-25 19:18:27.960310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.960336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.699 [2024-07-25 19:18:27.960513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.699 [2024-07-25 19:18:27.960539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.699 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.960708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.960733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.960883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.960908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.961063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.961088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 
00:27:35.700 [2024-07-25 19:18:27.961239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.961266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.961428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.961455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.961597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.961623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.961802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.961828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.961972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.961998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 
00:27:35.700 [2024-07-25 19:18:27.962171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.962198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.962375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.962401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.962558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.962584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.962732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.962758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.962941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.962967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 
00:27:35.700 [2024-07-25 19:18:27.963135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.963161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.963305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.963332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.963479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.963505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.963654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.963680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.963821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.963847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 
00:27:35.700 [2024-07-25 19:18:27.964026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.964051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.964196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.964223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.964396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.964422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.964594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.964620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.964760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.964788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 
00:27:35.700 [2024-07-25 19:18:27.964954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.964979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.965149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.965175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.965347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.965378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.965525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.965550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.965691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.965717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 
00:27:35.700 [2024-07-25 19:18:27.965856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.965882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.966019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.966044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.966207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.966233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.966390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.700 [2024-07-25 19:18:27.966415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.700 qpair failed and we were unable to recover it. 00:27:35.700 [2024-07-25 19:18:27.966564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.966589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 
00:27:35.701 [2024-07-25 19:18:27.966733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.966759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.966904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.966930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.967067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.967093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.967273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.967299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.967464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.967490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 
00:27:35.701 [2024-07-25 19:18:27.967699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.967725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.967871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.967896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.968042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.968069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.968237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.968264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.968433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.968459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 
00:27:35.701 [2024-07-25 19:18:27.968627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.968652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.968792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.968818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.968989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.969015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.969166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.969193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.969351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.969377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 
00:27:35.701 [2024-07-25 19:18:27.969551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.969577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.969754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.969779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.969927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.969953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.970113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.970139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 00:27:35.701 [2024-07-25 19:18:27.970314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.701 [2024-07-25 19:18:27.970354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:35.701 qpair failed and we were unable to recover it. 
00:27:35.701 [2024-07-25 19:18:27.970513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.970540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.701 [2024-07-25 19:18:27.970690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.970717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.701 [2024-07-25 19:18:27.970867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.970892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.701 [2024-07-25 19:18:27.971030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.971056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.701 [2024-07-25 19:18:27.971219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.971248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.701 [2024-07-25 19:18:27.971402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.971427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.701 [2024-07-25 19:18:27.971582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.971607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.701 [2024-07-25 19:18:27.971742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.971768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.701 [2024-07-25 19:18:27.971935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.971960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.701 [2024-07-25 19:18:27.972132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.972158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.701 [2024-07-25 19:18:27.972298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.972325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.701 [2024-07-25 19:18:27.972475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.972500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.701 [2024-07-25 19:18:27.972635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.972665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.701 [2024-07-25 19:18:27.972815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.972842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.701 [2024-07-25 19:18:27.972995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.973020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.701 [2024-07-25 19:18:27.973192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.701 [2024-07-25 19:18:27.973218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.701 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.973368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.973394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.973546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.973571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.973713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.973738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.973889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.973917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.974068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.974095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.974259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.974285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.974441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.974467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.974635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.974661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.974809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.974834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.974982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.975007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.975155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.975182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.975326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.975351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.975493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.975518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.975667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.975693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.975840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.975866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.976026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.976052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.976207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.976235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.976382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.976408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.976542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.976568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.976705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.976730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.976868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.976893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.977042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.977067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.977211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.977238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.977430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.977471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.977626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.977653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.977791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.977818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.977984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.978010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.978167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.978194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.978341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.978367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.978535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.978561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.978707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.978733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.978909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.978935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.979075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.979110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.979310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.702 [2024-07-25 19:18:27.979336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.702 qpair failed and we were unable to recover it.
00:27:35.702 [2024-07-25 19:18:27.979475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.979500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.979658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.979684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.979832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.979865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.980014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.980041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.980187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.980215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.980363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.980389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.980547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.980574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.980752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.980778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.980941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.980966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.981110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.981136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.981304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.981330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.981474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.981499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.981644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.981669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.981855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.981881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.982021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.982046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.982228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.982254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.982414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.982441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.982593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.982619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.982778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.982804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.982947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.982973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.983124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.983151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.983290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.983316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.983477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.983502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.983642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.983668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.983845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.983872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.984024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.984050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.984194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.984221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.984418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.984444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.984598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.984624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.984776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.703 [2024-07-25 19:18:27.984802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.703 qpair failed and we were unable to recover it.
00:27:35.703 [2024-07-25 19:18:27.984938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.704 [2024-07-25 19:18:27.984964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.704 qpair failed and we were unable to recover it.
00:27:35.704 [2024-07-25 19:18:27.985131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.704 [2024-07-25 19:18:27.985157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.704 qpair failed and we were unable to recover it.
00:27:35.704 [2024-07-25 19:18:27.985294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.704 [2024-07-25 19:18:27.985320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.704 qpair failed and we were unable to recover it.
00:27:35.704 [2024-07-25 19:18:27.985471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.704 [2024-07-25 19:18:27.985498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.704 qpair failed and we were unable to recover it.
00:27:35.704 [2024-07-25 19:18:27.985665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.704 [2024-07-25 19:18:27.985691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.704 qpair failed and we were unable to recover it.
00:27:35.704 [2024-07-25 19:18:27.985826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.704 [2024-07-25 19:18:27.985852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.704 qpair failed and we were unable to recover it.
00:27:35.704 [2024-07-25 19:18:27.986021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.704 [2024-07-25 19:18:27.986047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.704 qpair failed and we were unable to recover it.
00:27:35.704 [2024-07-25 19:18:27.986209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.704 [2024-07-25 19:18:27.986235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.704 qpair failed and we were unable to recover it.
00:27:35.704 [2024-07-25 19:18:27.986397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.704 [2024-07-25 19:18:27.986423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.704 qpair failed and we were unable to recover it.
00:27:35.704 [2024-07-25 19:18:27.986579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.704 [2024-07-25 19:18:27.986605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.704 qpair failed and we were unable to recover it.
00:27:35.704 [2024-07-25 19:18:27.986746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.704 [2024-07-25 19:18:27.986772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.704 qpair failed and we were unable to recover it.
00:27:35.704 [2024-07-25 19:18:27.986947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.704 [2024-07-25 19:18:27.986974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.704 qpair failed and we were unable to recover it.
00:27:35.704 [2024-07-25 19:18:27.987115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.704 [2024-07-25 19:18:27.987146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.704 qpair failed and we were unable to recover it.
00:27:35.704 [2024-07-25 19:18:27.987315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.704 [2024-07-25 19:18:27.987341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.704 qpair failed and we were unable to recover it.
00:27:35.704 [2024-07-25 19:18:27.987494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.704 [2024-07-25 19:18:27.987519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:35.704 qpair failed and we were unable to recover it.
00:27:35.704 [2024-07-25 19:18:27.987657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.987682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 00:27:35.704 [2024-07-25 19:18:27.987852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.987877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 00:27:35.704 [2024-07-25 19:18:27.988043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.988069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 00:27:35.704 [2024-07-25 19:18:27.988247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.988274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 00:27:35.704 [2024-07-25 19:18:27.988413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.988439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 
00:27:35.704 [2024-07-25 19:18:27.988606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.988631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 00:27:35.704 [2024-07-25 19:18:27.988778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.988804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 00:27:35.704 [2024-07-25 19:18:27.988972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.988998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 00:27:35.704 [2024-07-25 19:18:27.989164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.989192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 00:27:35.704 [2024-07-25 19:18:27.989355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.989382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 
00:27:35.704 [2024-07-25 19:18:27.989516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.989542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 00:27:35.704 [2024-07-25 19:18:27.989699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.989725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 00:27:35.704 [2024-07-25 19:18:27.989896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.989922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 00:27:35.704 [2024-07-25 19:18:27.990063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.990089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 00:27:35.704 [2024-07-25 19:18:27.990265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.990291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 
00:27:35.704 [2024-07-25 19:18:27.990474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.990500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 00:27:35.704 [2024-07-25 19:18:27.990676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.704 [2024-07-25 19:18:27.990701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.704 qpair failed and we were unable to recover it. 00:27:35.704 [2024-07-25 19:18:27.990842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.990868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.991008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.991034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.991209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.991236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 
00:27:35.705 [2024-07-25 19:18:27.991380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.991405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.991574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.991600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.991761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.991787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.991939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.991966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.992113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.992140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 
00:27:35.705 [2024-07-25 19:18:27.992309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.992334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.992478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.992504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.992643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.992669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.992836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.992862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.993039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.993065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 
00:27:35.705 [2024-07-25 19:18:27.993225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.993251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.993420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.993446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.993585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.993610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.993790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.993815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.993954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.993979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 
00:27:35.705 [2024-07-25 19:18:27.994124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.994151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.994315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.994341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.994500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.994527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.994677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.994702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.994852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.994878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 
00:27:35.705 [2024-07-25 19:18:27.995024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.995051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.995197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.995224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.995364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.995390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.995529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.995555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.995702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.995729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 
00:27:35.705 [2024-07-25 19:18:27.995868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.995895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.996048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.996074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.996243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.996269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.996408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.996435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.996604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.996630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 
00:27:35.705 [2024-07-25 19:18:27.996779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.996805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.996955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.996981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.997126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.997152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.997340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.997366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.705 qpair failed and we were unable to recover it. 00:27:35.705 [2024-07-25 19:18:27.997513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.705 [2024-07-25 19:18:27.997540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 
00:27:35.706 [2024-07-25 19:18:27.997687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:27.997714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:27.997881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:27.997907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:27.998107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:27.998133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:27.998294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:27.998320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:27.998471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:27.998497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 
00:27:35.706 [2024-07-25 19:18:27.998662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:27.998687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:27.998831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:27.998857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:27.998998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:27.999024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:27.999191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:27.999218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:27.999360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:27.999390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 
00:27:35.706 [2024-07-25 19:18:27.999539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:27.999567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:27.999707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:27.999732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:27.999870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:27.999896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.000050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.000076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.000224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.000250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 
00:27:35.706 [2024-07-25 19:18:28.000423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.000450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.000591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.000617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.000764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.000790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.000930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.000956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.001095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.001126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 
00:27:35.706 [2024-07-25 19:18:28.001312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.001338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.001506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.001532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.001682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.001708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.001854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.001880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.002046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.002072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 
00:27:35.706 [2024-07-25 19:18:28.002217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.002243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.002413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.002439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.002579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.002605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.002781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.002807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.002980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.003006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 
00:27:35.706 [2024-07-25 19:18:28.003165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.003191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.003362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.003388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.003552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.003579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.003721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.003747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 00:27:35.706 [2024-07-25 19:18:28.003885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.706 [2024-07-25 19:18:28.003910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:35.706 qpair failed and we were unable to recover it. 
[The same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats from 19:18:28.004054 through 19:18:28.010589.]
00:27:35.707 [2024-07-25 19:18:28.010773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.708 [2024-07-25 19:18:28.010815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.708 qpair failed and we were unable to recover it.
[The sequence continues for tqpair=0x11fd250 through 19:18:28.011605.]
[The same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats from 19:18:28.011776 through 19:18:28.024149.]
00:27:35.710 [2024-07-25 19:18:28.024344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.024369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.024544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.024568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.024710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.024735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.024905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.024931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.025072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.025097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 
00:27:35.710 [2024-07-25 19:18:28.025268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.025293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.025463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.025488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.025642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.025666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.025817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.025848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.026029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.026058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 
00:27:35.710 [2024-07-25 19:18:28.026228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.026258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.026434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.026464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.026645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.026682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.026828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.026854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.027008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.027032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 
00:27:35.710 [2024-07-25 19:18:28.027171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.027197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.027359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.027385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.027548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.027572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.027741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.027766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.027905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.027930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 
00:27:35.710 [2024-07-25 19:18:28.028130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.028156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.028303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.028329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.028499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.028524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.028693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.028718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.028858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.028883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 
00:27:35.710 [2024-07-25 19:18:28.029050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.710 [2024-07-25 19:18:28.029076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.710 qpair failed and we were unable to recover it. 00:27:35.710 [2024-07-25 19:18:28.029275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.029300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.029466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.029491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.029639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.029665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.029833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.029858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 
00:27:35.711 [2024-07-25 19:18:28.030041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.030066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.030221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.030246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.030390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.030415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.030581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.030606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.030754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.030778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 
00:27:35.711 [2024-07-25 19:18:28.030970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.030995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.031145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.031171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.031312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.031337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.031504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.031529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.031688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.031713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 
00:27:35.711 [2024-07-25 19:18:28.031881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.031905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.032076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.032116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.032273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.032297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.032441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.032466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.032631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.032656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 
00:27:35.711 [2024-07-25 19:18:28.032829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.032855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.032999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.033025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.033183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.033209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.033374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.033399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.033536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.033561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 
00:27:35.711 [2024-07-25 19:18:28.033735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.033760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.033917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.033941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.034092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.034122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.034273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.034299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.034445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.034469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 
00:27:35.711 [2024-07-25 19:18:28.034636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.034661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.034802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.034827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.034963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.034987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.035135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.035160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.035324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.035350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 
00:27:35.711 [2024-07-25 19:18:28.035513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.035538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.711 [2024-07-25 19:18:28.035707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.711 [2024-07-25 19:18:28.035732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.711 qpair failed and we were unable to recover it. 00:27:35.712 [2024-07-25 19:18:28.035875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.035901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 00:27:35.712 [2024-07-25 19:18:28.036035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.036061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 00:27:35.712 [2024-07-25 19:18:28.036240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.036266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 
00:27:35.712 [2024-07-25 19:18:28.036441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.036466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 00:27:35.712 [2024-07-25 19:18:28.036617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.036642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 00:27:35.712 [2024-07-25 19:18:28.036819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.036844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 00:27:35.712 [2024-07-25 19:18:28.037016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.037041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 00:27:35.712 [2024-07-25 19:18:28.037182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.037208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 
00:27:35.712 [2024-07-25 19:18:28.037373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.037398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 00:27:35.712 [2024-07-25 19:18:28.037567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.037592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 00:27:35.712 [2024-07-25 19:18:28.037749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.037775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 00:27:35.712 [2024-07-25 19:18:28.037951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.037976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 00:27:35.712 [2024-07-25 19:18:28.038123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.038149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 
00:27:35.712 [2024-07-25 19:18:28.038291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.038318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 00:27:35.712 [2024-07-25 19:18:28.038460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.038484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 00:27:35.712 [2024-07-25 19:18:28.038617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.038641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 00:27:35.712 [2024-07-25 19:18:28.038812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.038837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 00:27:35.712 [2024-07-25 19:18:28.039002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.712 [2024-07-25 19:18:28.039026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.712 qpair failed and we were unable to recover it. 
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." records repeat for every retry from 2024-07-25 19:18:28.039187 through 19:18:28.059445 (log time 00:27:35.712–00:27:35.715) ...]
00:27:35.715 [2024-07-25 19:18:28.059613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.715 [2024-07-25 19:18:28.059639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.715 qpair failed and we were unable to recover it. 00:27:35.715 [2024-07-25 19:18:28.059803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.715 [2024-07-25 19:18:28.059827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.715 qpair failed and we were unable to recover it. 00:27:35.715 [2024-07-25 19:18:28.060001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.715 [2024-07-25 19:18:28.060027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.715 qpair failed and we were unable to recover it. 00:27:35.715 [2024-07-25 19:18:28.060165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.715 [2024-07-25 19:18:28.060191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.715 qpair failed and we were unable to recover it. 00:27:35.715 [2024-07-25 19:18:28.060363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.715 [2024-07-25 19:18:28.060387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.715 qpair failed and we were unable to recover it. 
00:27:35.715 [2024-07-25 19:18:28.060556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.715 [2024-07-25 19:18:28.060580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.715 qpair failed and we were unable to recover it. 00:27:35.715 [2024-07-25 19:18:28.060729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.715 [2024-07-25 19:18:28.060756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.715 qpair failed and we were unable to recover it. 00:27:35.715 [2024-07-25 19:18:28.060895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.715 [2024-07-25 19:18:28.060921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.715 qpair failed and we were unable to recover it. 00:27:35.715 [2024-07-25 19:18:28.061119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.715 [2024-07-25 19:18:28.061144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.715 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.061329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.061354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 
00:27:35.716 [2024-07-25 19:18:28.061507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.061533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.061703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.061728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.061867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.061896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.062071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.062096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.062251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.062276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 
00:27:35.716 [2024-07-25 19:18:28.062453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.062478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.062619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.062643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.062807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.062833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.062981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.063005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.063179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.063204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 
00:27:35.716 [2024-07-25 19:18:28.063358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.063383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.063557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.063583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.063731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.063756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.063951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.063976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.064114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.064139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 
00:27:35.716 [2024-07-25 19:18:28.064283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.064308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.064459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.064484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.064624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.064650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.064808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.064833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.065030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.065055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 
00:27:35.716 [2024-07-25 19:18:28.065192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.065217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.065382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.065408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.065547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.065573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.065744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.065768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.065961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.065987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 
00:27:35.716 [2024-07-25 19:18:28.066154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.066181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.066376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.066402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.066579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.066603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.066766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.066790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.066941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.066970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 
00:27:35.716 [2024-07-25 19:18:28.067114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.067140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.067332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.067357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.067489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.716 [2024-07-25 19:18:28.067514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.716 qpair failed and we were unable to recover it. 00:27:35.716 [2024-07-25 19:18:28.067667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.067692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.067837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.067862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 
00:27:35.717 [2024-07-25 19:18:28.068016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.068041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.068180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.068205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.068347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.068371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.068525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.068550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.068701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.068726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 
00:27:35.717 [2024-07-25 19:18:28.068923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.068949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.069083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.069113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.069275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.069301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.069471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.069496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.069648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.069673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 
00:27:35.717 [2024-07-25 19:18:28.069819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.069844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.070012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.070037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.070194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.070220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.070386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.070411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.070549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.070575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 
00:27:35.717 [2024-07-25 19:18:28.070717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.070743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.070893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.070917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.071082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.071123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.071282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.071308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.071459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.071484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 
00:27:35.717 [2024-07-25 19:18:28.071627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.071653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.071800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.071825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.071997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.072022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.072177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.072202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.072343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.072369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 
00:27:35.717 [2024-07-25 19:18:28.072536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.072561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.072707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.072731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.072900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.072926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.073079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.073109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.073287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.073311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 
00:27:35.717 [2024-07-25 19:18:28.073449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.073473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.073651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.073676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.073842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.073867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.074019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.074044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 00:27:35.717 [2024-07-25 19:18:28.074190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.717 [2024-07-25 19:18:28.074216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.717 qpair failed and we were unable to recover it. 
00:27:35.720 [2024-07-25 19:18:28.094494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.720 [2024-07-25 19:18:28.094519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.720 qpair failed and we were unable to recover it. 00:27:35.720 [2024-07-25 19:18:28.094674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.720 [2024-07-25 19:18:28.094699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.720 qpair failed and we were unable to recover it. 00:27:35.720 [2024-07-25 19:18:28.094874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.720 [2024-07-25 19:18:28.094901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.720 qpair failed and we were unable to recover it. 00:27:35.720 [2024-07-25 19:18:28.095050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.720 [2024-07-25 19:18:28.095076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.720 qpair failed and we were unable to recover it. 00:27:35.720 [2024-07-25 19:18:28.095263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.720 [2024-07-25 19:18:28.095289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.720 qpair failed and we were unable to recover it. 
00:27:35.720 [2024-07-25 19:18:28.095435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.720 [2024-07-25 19:18:28.095459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.720 qpair failed and we were unable to recover it. 00:27:35.720 [2024-07-25 19:18:28.095602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.720 [2024-07-25 19:18:28.095626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.720 qpair failed and we were unable to recover it. 00:27:35.720 [2024-07-25 19:18:28.095793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.720 [2024-07-25 19:18:28.095818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.720 qpair failed and we were unable to recover it. 00:27:35.720 [2024-07-25 19:18:28.095974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.720 [2024-07-25 19:18:28.095998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.720 qpair failed and we were unable to recover it. 00:27:35.720 [2024-07-25 19:18:28.096170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.720 [2024-07-25 19:18:28.096196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.720 qpair failed and we were unable to recover it. 
00:27:35.721 [2024-07-25 19:18:28.096360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.096386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.096541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.096566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.096735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.096760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.096922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.096946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.097105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.097130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 
00:27:35.721 [2024-07-25 19:18:28.097286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.097311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.097481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.097506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.097654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.097679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.097873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.097898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.098039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.098063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 
00:27:35.721 [2024-07-25 19:18:28.098224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.098249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.098417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.098442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.098614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.098640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.098812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.098836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.099006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.099030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 
00:27:35.721 [2024-07-25 19:18:28.099209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.099235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.099379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.099405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.099541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.099565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.099724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.099749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.099903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.099929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 
00:27:35.721 [2024-07-25 19:18:28.100115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.100141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.100294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.100320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.100486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.100512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.100706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.100731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.100882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.100907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 
00:27:35.721 [2024-07-25 19:18:28.101054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.101079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.101236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.101261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.101408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.101433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.101590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.101615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.101812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.101841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 
00:27:35.721 [2024-07-25 19:18:28.101988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.102012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.102155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.102181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.102374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.102399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.102541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.102566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.102721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.102746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 
00:27:35.721 [2024-07-25 19:18:28.102892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.102917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.103062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.103087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.103231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.103257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.103426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.103451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.721 [2024-07-25 19:18:28.103618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.103643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 
00:27:35.721 [2024-07-25 19:18:28.103799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.721 [2024-07-25 19:18:28.103824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.721 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.103987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.104011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.104150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.104175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.104325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.104351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.104501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.104526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 
00:27:35.722 [2024-07-25 19:18:28.104683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.104707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.104877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.104902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.105038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.105063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.105233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.105258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.105411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.105435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 
00:27:35.722 [2024-07-25 19:18:28.105594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.105619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.105786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.105810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.105981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.106007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.106176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.106202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.106345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.106371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 
00:27:35.722 [2024-07-25 19:18:28.106511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.106536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.106684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.106713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.106880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.106905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.107053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.107079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.107243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.107269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 
00:27:35.722 [2024-07-25 19:18:28.107416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.107441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.107610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.107634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.107790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.107814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.107967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.107992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 00:27:35.722 [2024-07-25 19:18:28.108147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.722 [2024-07-25 19:18:28.108172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:35.722 qpair failed and we were unable to recover it. 
00:27:35.722 [2024-07-25 19:18:28.108346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.722 [2024-07-25 19:18:28.108370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.722 qpair failed and we were unable to recover it.
00:27:35.722 [2024-07-25 19:18:28.108512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.722 [2024-07-25 19:18:28.108537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.722 qpair failed and we were unable to recover it.
00:27:35.722 [2024-07-25 19:18:28.108709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.722 [2024-07-25 19:18:28.108734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.722 qpair failed and we were unable to recover it.
00:27:35.722 [2024-07-25 19:18:28.108890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.722 [2024-07-25 19:18:28.108915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.722 qpair failed and we were unable to recover it.
00:27:35.722 [2024-07-25 19:18:28.109067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.722 [2024-07-25 19:18:28.109091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.722 qpair failed and we were unable to recover it.
00:27:35.722 [2024-07-25 19:18:28.109274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.722 [2024-07-25 19:18:28.109300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.722 qpair failed and we were unable to recover it.
00:27:35.722 [2024-07-25 19:18:28.109439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.722 [2024-07-25 19:18:28.109464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.722 qpair failed and we were unable to recover it.
00:27:35.722 [2024-07-25 19:18:28.109636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.722 [2024-07-25 19:18:28.109661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.722 qpair failed and we were unable to recover it.
00:27:35.722 [2024-07-25 19:18:28.109822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.722 [2024-07-25 19:18:28.109847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.722 qpair failed and we were unable to recover it.
00:27:35.722 [2024-07-25 19:18:28.110003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.722 [2024-07-25 19:18:28.110028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.722 qpair failed and we were unable to recover it.
00:27:35.722 [2024-07-25 19:18:28.110177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.722 [2024-07-25 19:18:28.110203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.722 qpair failed and we were unable to recover it.
00:27:35.722 [2024-07-25 19:18:28.110354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.722 [2024-07-25 19:18:28.110379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.722 qpair failed and we were unable to recover it.
00:27:35.722 [2024-07-25 19:18:28.110552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.110577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.110739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.110765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.110900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.110925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.111064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.111090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.111269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.111295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.111440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.111465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.111641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.111666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.111841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.111868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.112044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.112069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.112233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.112259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.112401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.112426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.112597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.112623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.112792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.112817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.112970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.112995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.113135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.113161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.113306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.113333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.113523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.113548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.113690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.113715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.113878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.113903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.114048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.114075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.114233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.114270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.114446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.114476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.114626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.114652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.114815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.114840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.114992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.115017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.115179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.115206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.115364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.115391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.723 [2024-07-25 19:18:28.115555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.723 [2024-07-25 19:18:28.115581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.723 qpair failed and we were unable to recover it.
00:27:35.724 [2024-07-25 19:18:28.115765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.724 [2024-07-25 19:18:28.115791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.724 qpair failed and we were unable to recover it.
00:27:35.724 [2024-07-25 19:18:28.115948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.724 [2024-07-25 19:18:28.115975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.724 qpair failed and we were unable to recover it.
00:27:35.724 [2024-07-25 19:18:28.116145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.724 [2024-07-25 19:18:28.116175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:35.724 qpair failed and we were unable to recover it.
00:27:35.724 [2024-07-25 19:18:28.116232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120b230 (9): Bad file descriptor
00:27:35.724 [2024-07-25 19:18:28.116442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.724 [2024-07-25 19:18:28.116488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:35.724 qpair failed and we were unable to recover it.
00:27:35.724 [2024-07-25 19:18:28.116654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.724 [2024-07-25 19:18:28.116682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:35.724 qpair failed and we were unable to recover it.
00:27:35.724 [2024-07-25 19:18:28.116864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.724 [2024-07-25 19:18:28.116892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:35.724 qpair failed and we were unable to recover it.
00:27:35.724 [2024-07-25 19:18:28.117042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.724 [2024-07-25 19:18:28.117067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:35.724 qpair failed and we were unable to recover it.
00:27:35.724 [2024-07-25 19:18:28.117226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.724 [2024-07-25 19:18:28.117253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:35.724 qpair failed and we were unable to recover it.
00:27:35.724 [2024-07-25 19:18:28.117391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.724 [2024-07-25 19:18:28.117416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:35.724 qpair failed and we were unable to recover it.
00:27:35.724 [2024-07-25 19:18:28.117588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.724 [2024-07-25 19:18:28.117614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:35.724 qpair failed and we were unable to recover it.
00:27:35.724 [2024-07-25 19:18:28.117768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.724 [2024-07-25 19:18:28.117802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:35.724 qpair failed and we were unable to recover it.
00:27:35.724 [2024-07-25 19:18:28.117973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.724 [2024-07-25 19:18:28.118001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:35.724 qpair failed and we were unable to recover it.
00:27:35.724 [2024-07-25 19:18:28.118158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.724 [2024-07-25 19:18:28.118185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:35.724 qpair failed and we were unable to recover it.
00:27:36.007 [2024-07-25 19:18:28.118332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.007 [2024-07-25 19:18:28.118358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.007 qpair failed and we were unable to recover it.
00:27:36.007 [2024-07-25 19:18:28.118505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.007 [2024-07-25 19:18:28.118530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.007 qpair failed and we were unable to recover it.
00:27:36.007 [2024-07-25 19:18:28.118677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.007 [2024-07-25 19:18:28.118703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.007 qpair failed and we were unable to recover it.
00:27:36.007 [2024-07-25 19:18:28.118845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.007 [2024-07-25 19:18:28.118870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.007 qpair failed and we were unable to recover it.
00:27:36.007 [2024-07-25 19:18:28.119010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.007 [2024-07-25 19:18:28.119036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.007 qpair failed and we were unable to recover it.
00:27:36.007 [2024-07-25 19:18:28.119186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.007 [2024-07-25 19:18:28.119217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.007 qpair failed and we were unable to recover it.
00:27:36.007 [2024-07-25 19:18:28.119367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.007 [2024-07-25 19:18:28.119392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.007 qpair failed and we were unable to recover it.
00:27:36.007 [2024-07-25 19:18:28.119540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.119566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.119707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.119732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.119885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.119912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.120060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.120087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.120251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.120281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.120430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.120456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.120600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.120625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.120783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.120809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.120968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.120994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.121144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.121171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.121312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.121338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.121504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.121530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.121694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.121720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.121871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.121897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.122037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.122063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.122220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.122246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.122391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.122417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.122564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.122591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.122734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.122760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.122925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.122950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.123086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.123117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.123268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.123293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.123463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.123488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.123631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.123657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.123806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.123832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.123974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.124004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.124142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.124168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.124314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.124341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.124513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.124539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.124698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.124724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.124868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.124894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.125040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.125066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.125222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.125248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.125424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.125450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.125589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.125615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.125772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.008 [2024-07-25 19:18:28.125797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.008 qpair failed and we were unable to recover it.
00:27:36.008 [2024-07-25 19:18:28.125943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.125968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.126133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.126159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.126300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.126326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.126475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.126501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.126644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.126669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.126838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.126863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.127007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.127032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.127177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.127202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.127368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.127394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.127566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.127592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.127740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.127765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.127918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.127943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.128091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.128130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.128290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.128316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.128457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.128482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.128623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.128649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.128790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.128816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.128964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.128988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.129151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.009 [2024-07-25 19:18:28.129177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.009 qpair failed and we were unable to recover it.
00:27:36.009 [2024-07-25 19:18:28.129342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.129367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.009 [2024-07-25 19:18:28.129524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.129550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.009 [2024-07-25 19:18:28.129695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.129720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.009 [2024-07-25 19:18:28.129888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.129914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.009 [2024-07-25 19:18:28.130058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.130083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 
00:27:36.009 [2024-07-25 19:18:28.130262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.130287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.009 [2024-07-25 19:18:28.130483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.130509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.009 [2024-07-25 19:18:28.130663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.130689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.009 [2024-07-25 19:18:28.130862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.130887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.009 [2024-07-25 19:18:28.131036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.131062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 
00:27:36.009 [2024-07-25 19:18:28.131235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.131260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.009 [2024-07-25 19:18:28.131419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.131449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.009 [2024-07-25 19:18:28.131624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.131649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.009 [2024-07-25 19:18:28.131790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.131814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.009 [2024-07-25 19:18:28.131992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.132017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 
00:27:36.009 [2024-07-25 19:18:28.132185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.132211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.009 [2024-07-25 19:18:28.132351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.132375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.009 [2024-07-25 19:18:28.132524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.132550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.009 [2024-07-25 19:18:28.132697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.009 [2024-07-25 19:18:28.132722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.009 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.132887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.132911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 
00:27:36.010 [2024-07-25 19:18:28.133058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.133082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.133226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.133252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.133394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.133419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.133561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.133586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.133753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.133778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 
00:27:36.010 [2024-07-25 19:18:28.133922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.133947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.134091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.134128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.134303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.134330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.134472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.134499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.134648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.134674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 
00:27:36.010 [2024-07-25 19:18:28.134861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.134886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.135052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.135077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.135259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.135285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.135429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.135454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.135595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.135620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 
00:27:36.010 [2024-07-25 19:18:28.135810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.135835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.136011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.136036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.136198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.136223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.136377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.136406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.136566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.136604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 
00:27:36.010 [2024-07-25 19:18:28.136752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.136779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.136954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.136980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.137154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.137180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.137350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.137375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.137525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.137551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 
00:27:36.010 [2024-07-25 19:18:28.137703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.137729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.137897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.137923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.138080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.138110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.138284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.138310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.138473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.138498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 
00:27:36.010 [2024-07-25 19:18:28.138669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.138695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.138844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.138871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.139024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.139051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.139211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.139237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.139402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.139428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 
00:27:36.010 [2024-07-25 19:18:28.139573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.139598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.010 qpair failed and we were unable to recover it. 00:27:36.010 [2024-07-25 19:18:28.139772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.010 [2024-07-25 19:18:28.139797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 00:27:36.011 [2024-07-25 19:18:28.139965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.139991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 00:27:36.011 [2024-07-25 19:18:28.140164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.140191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 00:27:36.011 [2024-07-25 19:18:28.140345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.140370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 
00:27:36.011 [2024-07-25 19:18:28.140524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.140549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 00:27:36.011 [2024-07-25 19:18:28.140747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.140772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 00:27:36.011 [2024-07-25 19:18:28.140934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.140959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 00:27:36.011 [2024-07-25 19:18:28.141155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.141181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 00:27:36.011 [2024-07-25 19:18:28.141339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.141363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 
00:27:36.011 [2024-07-25 19:18:28.141505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.141534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 00:27:36.011 [2024-07-25 19:18:28.141710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.141736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 00:27:36.011 [2024-07-25 19:18:28.141886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.141910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 00:27:36.011 [2024-07-25 19:18:28.142052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.142077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 00:27:36.011 [2024-07-25 19:18:28.142222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.142248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 
00:27:36.011 [2024-07-25 19:18:28.142398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.142423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 00:27:36.011 [2024-07-25 19:18:28.142566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.142591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 00:27:36.011 [2024-07-25 19:18:28.142767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.142792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 00:27:36.011 [2024-07-25 19:18:28.142933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.142957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 00:27:36.011 [2024-07-25 19:18:28.143121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.011 [2024-07-25 19:18:28.143146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.011 qpair failed and we were unable to recover it. 
00:27:36.011 [2024-07-25 19:18:28.143285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.143310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.143450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.143476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.143667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.143691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.143860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.143885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.144086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.144118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.144262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.144287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.144428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.144454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.144595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.144620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.144828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.144853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.145011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.145035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.145195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.145221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.145365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.145391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.145536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.145561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.145700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.145725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.145922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.145947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.146090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.146120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.146258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.146282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.146446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.146476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.146651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.011 [2024-07-25 19:18:28.146676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.011 qpair failed and we were unable to recover it.
00:27:36.011 [2024-07-25 19:18:28.146848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.146873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.147017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.147043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.147247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.147287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.147468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.147495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.147647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.147674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.147816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.147845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.147990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.148016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.148172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.148199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.148343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.148370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.148531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.148556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.148725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.148751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.148920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.148946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.149111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.149138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.149313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.149339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.149482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.149507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.149649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.149676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.149831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.149856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.150037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.150064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.150215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.150241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.150392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.150417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.150576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.150600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.150735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.150761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.150933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.150958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.151099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.151130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.151272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.151298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.151449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.151478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.151620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.151645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.151804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.151830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.152013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.152038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.012 qpair failed and we were unable to recover it.
00:27:36.012 [2024-07-25 19:18:28.152176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.012 [2024-07-25 19:18:28.152202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.152350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.152375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.152526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.152551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.152739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.152764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.152923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.152948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.153087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.153119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.153262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.153287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.153469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.153493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.153630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.153655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.153827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.153853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.154001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.154027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.154189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.154216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.154354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.154380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.154552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.154578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.154739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.154764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.154926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.154952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.155094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.155125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.155293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.155318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.155486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.155510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.155682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.155708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.155883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.155909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.156077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.156110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.156278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.156303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.156470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.156495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.156676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.156701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.156845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.156870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.157020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.157046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.157220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.157246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.157393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.157418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.157600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.157625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.157768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.157797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.157944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.157970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.158130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.158157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.158299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.158324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.158472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.158499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.158698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.158724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.158871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.158896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.159065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.013 [2024-07-25 19:18:28.159091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.013 qpair failed and we were unable to recover it.
00:27:36.013 [2024-07-25 19:18:28.159248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.159274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.159476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.159502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.159657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.159683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.159820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.159845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.160010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.160035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.160184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.160211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.160359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.160391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.160561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.160586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.160733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.160758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.160935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.160962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.161122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.161162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.161340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.161367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.161518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.161547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.161693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.161719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.161884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.161910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.162087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.162119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.162293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.162319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.162491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.162517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.162675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.162701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.162873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.162898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.163070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.163095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.163265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.163293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.163437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.163462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.163604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.163629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.163786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.163812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.163974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.164000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.164158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.164186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.164363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.164390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.164529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.014 [2024-07-25 19:18:28.164555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.014 qpair failed and we were unable to recover it.
00:27:36.014 [2024-07-25 19:18:28.164706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.014 [2024-07-25 19:18:28.164732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.014 qpair failed and we were unable to recover it. 00:27:36.014 [2024-07-25 19:18:28.164886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.014 [2024-07-25 19:18:28.164913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.014 qpair failed and we were unable to recover it. 00:27:36.014 [2024-07-25 19:18:28.165059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.014 [2024-07-25 19:18:28.165084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.014 qpair failed and we were unable to recover it. 00:27:36.014 [2024-07-25 19:18:28.165242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.014 [2024-07-25 19:18:28.165267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.014 qpair failed and we were unable to recover it. 00:27:36.014 [2024-07-25 19:18:28.165444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.014 [2024-07-25 19:18:28.165470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.014 qpair failed and we were unable to recover it. 
00:27:36.014 [2024-07-25 19:18:28.165638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.014 [2024-07-25 19:18:28.165663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.014 qpair failed and we were unable to recover it. 00:27:36.014 [2024-07-25 19:18:28.165806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.014 [2024-07-25 19:18:28.165831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.014 qpair failed and we were unable to recover it. 00:27:36.014 [2024-07-25 19:18:28.165983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.014 [2024-07-25 19:18:28.166008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.014 qpair failed and we were unable to recover it. 00:27:36.014 [2024-07-25 19:18:28.166212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.014 [2024-07-25 19:18:28.166238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.014 qpair failed and we were unable to recover it. 00:27:36.014 [2024-07-25 19:18:28.166393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.166419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 
00:27:36.015 [2024-07-25 19:18:28.166565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.166596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.166762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.166788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.166938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.166965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.167127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.167154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.167292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.167318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 
00:27:36.015 [2024-07-25 19:18:28.167493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.167518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.167666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.167691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.167846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.167872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.168029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.168055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.168207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.168233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 
00:27:36.015 [2024-07-25 19:18:28.168401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.168426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.168570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.168595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.168744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.168769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.168913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.168939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.169086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.169119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 
00:27:36.015 [2024-07-25 19:18:28.169289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.169315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.169474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.169499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.169649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.169675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.169849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.169875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.170045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.170071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 
00:27:36.015 [2024-07-25 19:18:28.170212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.170238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.170403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.170429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.170589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.170614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.170752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.170778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.170945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.170971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 
00:27:36.015 [2024-07-25 19:18:28.171142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.171172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.171348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.171374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.171528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.171554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.171701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.171726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.171880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.171905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 
00:27:36.015 [2024-07-25 19:18:28.172078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.172108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.172256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.172281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.172422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.172447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.172588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.172613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.172784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.172809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 
00:27:36.015 [2024-07-25 19:18:28.172952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.172978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.015 [2024-07-25 19:18:28.173135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.015 [2024-07-25 19:18:28.173161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.015 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.173338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.173365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.173513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.173540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.173708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.173733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 
00:27:36.016 [2024-07-25 19:18:28.173891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.174032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.174218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.174244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.174397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.174422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.174610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.174637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.174784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.174810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 
00:27:36.016 [2024-07-25 19:18:28.174990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.175016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.175189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.175216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.175365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.175391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.175571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.175596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.175760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.175785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 
00:27:36.016 [2024-07-25 19:18:28.175946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.175971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.176162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.176188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.176356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.176382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.176523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.176549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.176695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.176720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 
00:27:36.016 [2024-07-25 19:18:28.176914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.176940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.177085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.177117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.177275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.177301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.177467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.177492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.177638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.177664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 
00:27:36.016 [2024-07-25 19:18:28.177809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.177836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.178035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.178060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.178203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.178229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.178372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.178397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.178566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.178592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 
00:27:36.016 [2024-07-25 19:18:28.178743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.178770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.178939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.178965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.179116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.179142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.179301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.179326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.179466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.179492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 
00:27:36.016 [2024-07-25 19:18:28.179660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.179685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.179870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.179896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.180053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.180078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.180249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.016 [2024-07-25 19:18:28.180276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.016 qpair failed and we were unable to recover it. 00:27:36.016 [2024-07-25 19:18:28.180440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.017 [2024-07-25 19:18:28.180466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.017 qpair failed and we were unable to recover it. 
00:27:36.017 [2024-07-25 19:18:28.180633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.180660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.180856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.180882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.181048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.181073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.181241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.181267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.181409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.181435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.181578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.181608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.181754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.181780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.181938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.181963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.182116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.182143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.182316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.182342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.182515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.182541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.182689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.182714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.182891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.182916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.183061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.183087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.183237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.183262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.183405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.183431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.183589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.183614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.183808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.183848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.184003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.184031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.184192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.184221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.184393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.184420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.184565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.184591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.184779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.184805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.184963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.184989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.185145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.185179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.185353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.185381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.185562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.185589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.185754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.185780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.185920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.185946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.186111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.186138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.017 qpair failed and we were unable to recover it.
00:27:36.017 [2024-07-25 19:18:28.186290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.017 [2024-07-25 19:18:28.186316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.186500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.186526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.186713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.186739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.186894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.186920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.187068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.187093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.187244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.187270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.187415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.187442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.187579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.187606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.187752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.187779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.187953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.187979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.188124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.188151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.188325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.188351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.188505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.188531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.188679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.188707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.188861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.188887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.189049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.189079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.189241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.189280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.189450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.189477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.189650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.189675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.189815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.189840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.190007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.190032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.190203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.190230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.190368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.190394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.190562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.190587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.190757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.190789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.190963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.190992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.191146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.191184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.191337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.191364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.191499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.191525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.191676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.191702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.191868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.191894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.192034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.192060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.192236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.192262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.192411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.192437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.192611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.192637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.192786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.192811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.192948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.192974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.193147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.018 [2024-07-25 19:18:28.193174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.018 qpair failed and we were unable to recover it.
00:27:36.018 [2024-07-25 19:18:28.193316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.193341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.193500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.193526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.193679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.193705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.193850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.193875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.194053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.194083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.194235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.194262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.194411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.194436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.194608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.194634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.194801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.194825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.194995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.195020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.195191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.195219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.195358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.195384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.195521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.195546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.195716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.195742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.195885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.195912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.196058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.196084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.196241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.196269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.196424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.196450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.196597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.196622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.196787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.196813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.196953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.196979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.197161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.197206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.197400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.197437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.197609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.197637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.197797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.197832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.197992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.198019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.198189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.019 [2024-07-25 19:18:28.198217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.019 qpair failed and we were unable to recover it.
00:27:36.019 [2024-07-25 19:18:28.198362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.019 [2024-07-25 19:18:28.198387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.019 qpair failed and we were unable to recover it. 00:27:36.019 [2024-07-25 19:18:28.198563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.019 [2024-07-25 19:18:28.198589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.019 qpair failed and we were unable to recover it. 00:27:36.019 [2024-07-25 19:18:28.198730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.019 [2024-07-25 19:18:28.198757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.019 qpair failed and we were unable to recover it. 00:27:36.019 [2024-07-25 19:18:28.198929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.019 [2024-07-25 19:18:28.198955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.019 qpair failed and we were unable to recover it. 00:27:36.019 [2024-07-25 19:18:28.199120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.019 [2024-07-25 19:18:28.199147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.019 qpair failed and we were unable to recover it. 
00:27:36.019 [2024-07-25 19:18:28.199294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.019 [2024-07-25 19:18:28.199321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.019 qpair failed and we were unable to recover it. 00:27:36.019 [2024-07-25 19:18:28.199495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.019 [2024-07-25 19:18:28.199521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.019 qpair failed and we were unable to recover it. 00:27:36.019 [2024-07-25 19:18:28.199686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.019 [2024-07-25 19:18:28.199711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.019 qpair failed and we were unable to recover it. 00:27:36.019 [2024-07-25 19:18:28.199848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.019 [2024-07-25 19:18:28.199873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.019 qpair failed and we were unable to recover it. 00:27:36.019 [2024-07-25 19:18:28.200021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.019 [2024-07-25 19:18:28.200047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.019 qpair failed and we were unable to recover it. 
00:27:36.019 [2024-07-25 19:18:28.200198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.019 [2024-07-25 19:18:28.200224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.200401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.200427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.200570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.200595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.200731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.200756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.200892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.200917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 
00:27:36.020 [2024-07-25 19:18:28.201085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.201116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.201257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.201283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.201424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.201455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.201614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.201640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.201802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.201827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 
00:27:36.020 [2024-07-25 19:18:28.201965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.201991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.202155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.202181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.202360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.202385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.202553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.202578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.202721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.202746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 
00:27:36.020 [2024-07-25 19:18:28.202912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.202938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.203081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.203114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.203291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.203316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.203481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.203506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.203667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.203692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 
00:27:36.020 [2024-07-25 19:18:28.203863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.203888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.204039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.204065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.204239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.204266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.204463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.204489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.204645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.204671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 
00:27:36.020 [2024-07-25 19:18:28.204817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.204843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.205015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.205055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.205240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.205269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.205442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.205468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.205607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.205632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 
00:27:36.020 [2024-07-25 19:18:28.205811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.205837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.205992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.206018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.206162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.206190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.206360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.206386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.206565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.206591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 
00:27:36.020 [2024-07-25 19:18:28.206731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.206758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.206905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.206932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.207076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.020 [2024-07-25 19:18:28.207109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.020 qpair failed and we were unable to recover it. 00:27:36.020 [2024-07-25 19:18:28.207262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.207288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.207459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.207486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 
00:27:36.021 [2024-07-25 19:18:28.207624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.207650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.207817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.207843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.208005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.208032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.208197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.208224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.208364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.208390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 
00:27:36.021 [2024-07-25 19:18:28.208544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.208571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.208717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.208743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.208876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.208906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.209054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.209080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.209242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.209269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 
00:27:36.021 [2024-07-25 19:18:28.209444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.209472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.209651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.209678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.209811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.209837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.210001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.210027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.210170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.210196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 
00:27:36.021 [2024-07-25 19:18:28.210367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.210393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.210546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.210572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.210716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.210742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.210882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.210909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.211092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.211137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 
00:27:36.021 [2024-07-25 19:18:28.211295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.211324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.211504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.211530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.211701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.211727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.211895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.211921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.212059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.212085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 
00:27:36.021 [2024-07-25 19:18:28.212253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.212279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.212432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.212458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.212620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.212645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.212818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.212843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 00:27:36.021 [2024-07-25 19:18:28.213003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.021 [2024-07-25 19:18:28.213029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.021 qpair failed and we were unable to recover it. 
00:27:36.021 [2024-07-25 19:18:28.213196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.022 [2024-07-25 19:18:28.213223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.022 qpair failed and we were unable to recover it. 00:27:36.022 [2024-07-25 19:18:28.213366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.022 [2024-07-25 19:18:28.213393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.022 qpair failed and we were unable to recover it. 00:27:36.022 [2024-07-25 19:18:28.213594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.022 [2024-07-25 19:18:28.213620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.022 qpair failed and we were unable to recover it. 00:27:36.022 [2024-07-25 19:18:28.213764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.022 [2024-07-25 19:18:28.213789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.022 qpair failed and we were unable to recover it. 00:27:36.022 [2024-07-25 19:18:28.213942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.022 [2024-07-25 19:18:28.213968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.022 qpair failed and we were unable to recover it. 
00:27:36.022 [2024-07-25 19:18:28.214118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.214145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.214308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.214334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.214491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.214517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.214702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.214729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.214875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.214901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.215110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.215159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.215317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.215345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.215489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.215516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.215656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.215681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.215856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.215883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.216056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.216082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.216239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.216266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.216415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.216446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.216589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.216615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.216805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.216831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.216996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.217021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.217163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.217189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.217338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.217365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.217528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.217555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.217692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.217717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.217912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.217939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.218085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.218118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.218295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.218321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.218493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.218518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.218667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.218695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.218870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.218896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.219069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.219095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.219257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.219283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.219433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.219459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.219616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.219643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.219822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.219848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.220006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.220033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.022 [2024-07-25 19:18:28.220207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.022 [2024-07-25 19:18:28.220235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.022 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.220381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.220408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.220551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.220577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.220710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.220736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.220882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.220907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.221049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.221075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.221251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.221277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.221430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.221457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.221594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.221620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.221756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.221782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.221969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.221993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.222138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.222164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.222304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.222331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.222500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.222526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.222678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.222703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.222875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.222901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.223048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.223074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.223251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.223277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.223445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.223471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.223648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.223675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.223818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.223849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.224026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.224052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.224200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.224226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.224398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.224425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.224577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.224603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.224769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.224795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.224964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.224989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.225141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.225169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.225315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.225341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.225514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.225539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.225723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.225749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.225919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.225945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.226090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.226121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.226287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.226313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.226459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.226485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.226629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.226656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.226849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.226875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.227020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.023 [2024-07-25 19:18:28.227046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.023 qpair failed and we were unable to recover it.
00:27:36.023 [2024-07-25 19:18:28.227210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.227238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.227408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.227434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.227580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.227605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.227769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.227795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.227947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.227972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.228123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.228149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.228285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.228311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.228479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.228505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.228704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.228730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.228879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.228904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.229075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.229110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.229257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.229285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.229439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.229464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.229597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.229623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.229800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.229827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.229967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.229992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.230139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.230174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.230319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.230346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.230510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.230536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.230695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.230721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.230862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.230888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.231025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.231051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.231226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.231258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.231424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.231450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.231638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.231665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.231839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.231865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.232040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.232065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.232226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.232252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.232400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.232427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.232571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.232596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.232783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.232808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.232957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.232983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.233139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.233168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.233331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.233358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.233502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.233527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.233704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.233731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.233904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.233931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.234111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.024 [2024-07-25 19:18:28.234137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.024 qpair failed and we were unable to recover it.
00:27:36.024 [2024-07-25 19:18:28.234280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.025 [2024-07-25 19:18:28.234307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.025 qpair failed and we were unable to recover it.
00:27:36.025 [2024-07-25 19:18:28.234466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.025 [2024-07-25 19:18:28.234491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.025 qpair failed and we were unable to recover it.
00:27:36.025 [2024-07-25 19:18:28.234638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.025 [2024-07-25 19:18:28.234664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.025 qpair failed and we were unable to recover it.
00:27:36.025 [2024-07-25 19:18:28.234810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.025 [2024-07-25 19:18:28.234836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.025 qpair failed and we were unable to recover it.
00:27:36.025 [2024-07-25 19:18:28.235025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.025 [2024-07-25 19:18:28.235051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.025 qpair failed and we were unable to recover it.
00:27:36.025 [2024-07-25 19:18:28.235209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.025 [2024-07-25 19:18:28.235237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.025 qpair failed and we were unable to recover it.
00:27:36.025 [2024-07-25 19:18:28.235431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.025 [2024-07-25 19:18:28.235457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.025 qpair failed and we were unable to recover it.
00:27:36.025 [2024-07-25 19:18:28.235631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.235656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.235790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.235817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.235983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.236009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.236197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.236223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.236396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.236422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 
00:27:36.025 [2024-07-25 19:18:28.236597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.236623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.236791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.236817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.236983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.237008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.237167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.237194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.237333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.237359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 
00:27:36.025 [2024-07-25 19:18:28.237527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.237553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.237723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.237749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.237893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.237919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.238061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.238088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.238259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.238287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 
00:27:36.025 [2024-07-25 19:18:28.238462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.238487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.238628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.238653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.238822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.238852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.238997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.239022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.239189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.239217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 
00:27:36.025 [2024-07-25 19:18:28.239364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.239391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.239551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.239577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.239736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.239763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.239909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.239936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.240126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.240153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 
00:27:36.025 [2024-07-25 19:18:28.240351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.240378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.240527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.240554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.240727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.240753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.240889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.240915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 00:27:36.025 [2024-07-25 19:18:28.241057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.241083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.025 qpair failed and we were unable to recover it. 
00:27:36.025 [2024-07-25 19:18:28.241261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.025 [2024-07-25 19:18:28.241289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.241470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.241497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.241671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.241697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.241869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.241896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.242063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.242089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 
00:27:36.026 [2024-07-25 19:18:28.242270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.242296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.242430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.242456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.242609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.242634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.242779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.242804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.242943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.242969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 
00:27:36.026 [2024-07-25 19:18:28.243117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.243148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.243293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.243319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.243487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.243514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.243660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.243686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.243841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.243867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 
00:27:36.026 [2024-07-25 19:18:28.244036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.244062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.244210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.244236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.244395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.244421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.244566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.244592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.244730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.244755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 
00:27:36.026 [2024-07-25 19:18:28.244918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.244944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.245114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.245140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.245317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.245344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.245492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.245518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.245662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.245690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 
00:27:36.026 [2024-07-25 19:18:28.245833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.245859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.246018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.246044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.246222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.246253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.246427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.246454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.026 [2024-07-25 19:18:28.246591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.246618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 
00:27:36.026 [2024-07-25 19:18:28.246778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.026 [2024-07-25 19:18:28.246804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.026 qpair failed and we were unable to recover it. 00:27:36.027 [2024-07-25 19:18:28.246948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.027 [2024-07-25 19:18:28.246975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.027 qpair failed and we were unable to recover it. 00:27:36.027 [2024-07-25 19:18:28.247146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.027 [2024-07-25 19:18:28.247174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.027 qpair failed and we were unable to recover it. 00:27:36.027 [2024-07-25 19:18:28.247349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.027 [2024-07-25 19:18:28.247374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.027 qpair failed and we were unable to recover it. 00:27:36.027 [2024-07-25 19:18:28.247550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.027 [2024-07-25 19:18:28.247575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.027 qpair failed and we were unable to recover it. 
00:27:36.027 [2024-07-25 19:18:28.247721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.027 [2024-07-25 19:18:28.247747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.027 qpair failed and we were unable to recover it. 00:27:36.027 [2024-07-25 19:18:28.247891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.027 [2024-07-25 19:18:28.247917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.027 qpair failed and we were unable to recover it. 00:27:36.027 [2024-07-25 19:18:28.248091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.027 [2024-07-25 19:18:28.248131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.027 qpair failed and we were unable to recover it. 00:27:36.027 [2024-07-25 19:18:28.248287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.027 [2024-07-25 19:18:28.248315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.027 qpair failed and we were unable to recover it. 00:27:36.027 [2024-07-25 19:18:28.248455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.027 [2024-07-25 19:18:28.248481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.027 qpair failed and we were unable to recover it. 
00:27:36.027 [2024-07-25 19:18:28.248640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.027 [2024-07-25 19:18:28.248667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.027 qpair failed and we were unable to recover it. 00:27:36.027 [2024-07-25 19:18:28.248864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.027 [2024-07-25 19:18:28.248889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.027 qpair failed and we were unable to recover it. 00:27:36.027 [2024-07-25 19:18:28.249029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.027 [2024-07-25 19:18:28.249054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.027 qpair failed and we were unable to recover it. 00:27:36.027 [2024-07-25 19:18:28.249216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.027 [2024-07-25 19:18:28.249243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.027 qpair failed and we were unable to recover it. 00:27:36.027 [2024-07-25 19:18:28.249418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.027 [2024-07-25 19:18:28.249444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.027 qpair failed and we were unable to recover it. 
00:27:36.027 [2024-07-25 19:18:28.249598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.249622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.249773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.249798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.249947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.249974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.250141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.250167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.250337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.250362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.250527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.250553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.250713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.250739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.250887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.250912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.251085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.251118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.251281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.251307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.251492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.251518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.251674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.251702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.251840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.251865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.252033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.252059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.252209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.252236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.252377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.252403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.252549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.252575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.252747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.252773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.252923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.252949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.253087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.253120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.253285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.253311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.253455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.253481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.253633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.027 [2024-07-25 19:18:28.253663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.027 qpair failed and we were unable to recover it.
00:27:36.027 [2024-07-25 19:18:28.253831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.253856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.253993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.254018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.254163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.254190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.254347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.254372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.254531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.254556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.254704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.254731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.254873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.254898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.255072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.255097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.255251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.255277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.255426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.255452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.255618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.255643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.255816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.255842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.256020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.256046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.256207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.256234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.256374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.256400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.256580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.256606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.256753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.256779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.256955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.256980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.257140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.257166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.257306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.257333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.257477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.257502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.257679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.257705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.257870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.257895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.258052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.258079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.258251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.258278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.258445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.258470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.258635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.258661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.258798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.258824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.258966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.258993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.259148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.259176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.259328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.259354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.259551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.259576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.259729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.259753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.259893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.259919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.260090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.260121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.260258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.260283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.260462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.028 [2024-07-25 19:18:28.260488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.028 qpair failed and we were unable to recover it.
00:27:36.028 [2024-07-25 19:18:28.260647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.260673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.260845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.260870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.261044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.261073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.261249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.261274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.261452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.261478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.261657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.261684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.261842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.261867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.262032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.262056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.262216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.262242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.262411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.262436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.262581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.262606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.262744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.262770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.262912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.262936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.263090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.263129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.263277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.263303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.263504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.263530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.263705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.263730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.263883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.263908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.264082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.264127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.264270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.264296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.264470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.264496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.264667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.264692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.264863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.264888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.265034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.265060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.265250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.265276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.265417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.265443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.265623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.265649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.265807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.265833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.265975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.266002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.266155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.266181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.266326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.266351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.266489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.266515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.266665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.266691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.266844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.266871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.267014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.267039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.267179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.267205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.267366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.267393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.029 qpair failed and we were unable to recover it.
00:27:36.029 [2024-07-25 19:18:28.267570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.029 [2024-07-25 19:18:28.267596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.267741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.267767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.267932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.267958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.268099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.268132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.268291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.268316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.268465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.268494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.268631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.268657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.268831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.268857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.269028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.269052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.269223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.269250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.269398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.269424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.269571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.269597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.269739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.269764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.269937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.269963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.270109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.270136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.270306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.270332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.270479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.270505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.270646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.030 [2024-07-25 19:18:28.270671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.030 qpair failed and we were unable to recover it.
00:27:36.030 [2024-07-25 19:18:28.270816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.270842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.270996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.271023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.271169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.271197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.271342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.271369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.271541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.271567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 
00:27:36.030 [2024-07-25 19:18:28.271749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.271775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.271921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.271947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.272122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.272148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.272334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.272359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.272497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.272522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 
00:27:36.030 [2024-07-25 19:18:28.272671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.272699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.272841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.272867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.273027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.273054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.273230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.273256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.273403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.273430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 
00:27:36.030 [2024-07-25 19:18:28.273578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.273605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.273777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.273802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.273968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.273993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.274148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.274175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 00:27:36.030 [2024-07-25 19:18:28.274328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.030 [2024-07-25 19:18:28.274356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.030 qpair failed and we were unable to recover it. 
00:27:36.030 [2024-07-25 19:18:28.274523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.274548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.274717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.274742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.274920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.274946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.275120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.275149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.275306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.275331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 
00:27:36.031 [2024-07-25 19:18:28.275508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.275533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.275679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.275706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.275846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.275876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.276065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.276090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.276280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.276306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 
00:27:36.031 [2024-07-25 19:18:28.276450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.276475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.276612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.276637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.276804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.276829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.276976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.277002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.277183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.277211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 
00:27:36.031 [2024-07-25 19:18:28.277352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.277378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.277519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.277545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.277699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.277726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.277917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.277942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.278078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.278109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 
00:27:36.031 [2024-07-25 19:18:28.278282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.278308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.278475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.278501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.278649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.278675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.278877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.278903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.279047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.279074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 
00:27:36.031 [2024-07-25 19:18:28.279258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.279285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.279456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.279483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.279635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.279663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.279808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.279834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.279978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.280005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 
00:27:36.031 [2024-07-25 19:18:28.280155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.280183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.031 qpair failed and we were unable to recover it. 00:27:36.031 [2024-07-25 19:18:28.280339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.031 [2024-07-25 19:18:28.280365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.280533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.280559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.280698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.280723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.280884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.280909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 
00:27:36.032 [2024-07-25 19:18:28.281051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.281077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.281252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.281279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.281449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.281475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.281653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.281679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.281819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.281846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 
00:27:36.032 [2024-07-25 19:18:28.282044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.282070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.282241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.282267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.282406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.282431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.282605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.282631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.282831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.282857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 
00:27:36.032 [2024-07-25 19:18:28.283000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.283026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.283199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.283227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.283387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.283414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.283559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.283585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.283749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.283775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 
00:27:36.032 [2024-07-25 19:18:28.283918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.283944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.284131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.284158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.284304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.284330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.284486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.284512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.032 [2024-07-25 19:18:28.284682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.284709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 
00:27:36.032 [2024-07-25 19:18:28.284847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.032 [2024-07-25 19:18:28.284873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.032 qpair failed and we were unable to recover it. 00:27:36.035 [previous messages repeated (same connect() failure, errno = 111, addr=10.0.0.2, port=4420) through 2024-07-25 19:18:28.305905]
00:27:36.035 [2024-07-25 19:18:28.306075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.035 [2024-07-25 19:18:28.306106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.035 qpair failed and we were unable to recover it. 00:27:36.035 [2024-07-25 19:18:28.306267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.035 [2024-07-25 19:18:28.306292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.035 qpair failed and we were unable to recover it. 00:27:36.035 [2024-07-25 19:18:28.306469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.035 [2024-07-25 19:18:28.306494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.035 qpair failed and we were unable to recover it. 00:27:36.035 [2024-07-25 19:18:28.306675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.035 [2024-07-25 19:18:28.306702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.035 qpair failed and we were unable to recover it. 00:27:36.035 [2024-07-25 19:18:28.306843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.035 [2024-07-25 19:18:28.306870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.035 qpair failed and we were unable to recover it. 
00:27:36.036 [2024-07-25 19:18:28.307021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.307047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.307218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.307245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.307390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.307415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.307551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.307577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.307779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.307805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 
00:27:36.036 [2024-07-25 19:18:28.307952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.307977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.308126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.308152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.308293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.308319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.308497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.308524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.308676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.308701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 
00:27:36.036 [2024-07-25 19:18:28.308848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.308876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.309040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.309066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.309249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.309275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.309432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.309458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.309660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.309686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 
00:27:36.036 [2024-07-25 19:18:28.309848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.309874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.310076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.310111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.310282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.310308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.310483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.310508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.310662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.310695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 
00:27:36.036 [2024-07-25 19:18:28.310895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.310922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.311060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.311085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.311298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.311324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.311490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.311516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.311701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.311726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 
00:27:36.036 [2024-07-25 19:18:28.311875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.311901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.312042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.312067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.312256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.312283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.312472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.312498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.312644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.312669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 
00:27:36.036 [2024-07-25 19:18:28.312841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.312867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.313020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.313048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.313224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.313250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.313395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.313421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.313618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.313643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 
00:27:36.036 [2024-07-25 19:18:28.313784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.313809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.036 [2024-07-25 19:18:28.313980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.036 [2024-07-25 19:18:28.314004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.036 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.314163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.314189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.314325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.314351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.314498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.314525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 
00:27:36.037 [2024-07-25 19:18:28.314703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.314729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.314876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.314902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.315074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.315099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.315257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.315282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.315452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.315477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 
00:27:36.037 [2024-07-25 19:18:28.315673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.315699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.315853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.315878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.316021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.316046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.316207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.316233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.316400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.316425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 
00:27:36.037 [2024-07-25 19:18:28.316598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.316623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.316791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.316816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.316963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.316989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.317157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.317184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.317336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.317362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 
00:27:36.037 [2024-07-25 19:18:28.317534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.317560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.317723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.317749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.317906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.317932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.318116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.318142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.318310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.318339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 
00:27:36.037 [2024-07-25 19:18:28.318507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.318533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.318702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.318728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.318870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.318896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.319073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.319099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.319282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.319309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 
00:27:36.037 [2024-07-25 19:18:28.319474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.319499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.319689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.319714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.319858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.319883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.320040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.320066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 00:27:36.037 [2024-07-25 19:18:28.320247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.037 [2024-07-25 19:18:28.320274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.037 qpair failed and we were unable to recover it. 
00:27:36.037 [2024-07-25 19:18:28.320456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.037 [2024-07-25 19:18:28.320481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.037 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure messages for tqpair=0x7f63d4000b90 repeated through 19:18:28.338, omitted ...]
00:27:36.040 [2024-07-25 19:18:28.338388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.040 [2024-07-25 19:18:28.338428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420
00:27:36.040 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure messages for tqpair=0x7f63cc000b90 repeated through 19:18:28.341, omitted ...]
00:27:36.041 [2024-07-25 19:18:28.342074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.342100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.342280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.342306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.342480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.342506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.342684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.342712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.342862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.342888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 
00:27:36.041 [2024-07-25 19:18:28.343030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.343056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.343202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.343228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.343375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.343402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.343550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.343576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.343742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.343768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 
00:27:36.041 [2024-07-25 19:18:28.343945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.343971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.344117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.344144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.344335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.344360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.344525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.344552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.344726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.344752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 
00:27:36.041 [2024-07-25 19:18:28.344896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.344923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.345106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.345133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.345302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.345327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.345500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.345526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.345696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.345722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 
00:27:36.041 [2024-07-25 19:18:28.345899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.345924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.346114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.346155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.346322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.346350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.346494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.346519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.346681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.346706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 
00:27:36.041 [2024-07-25 19:18:28.346846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.346871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.347007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.347031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.347198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.347225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.347403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.347429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.347579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.347606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 
00:27:36.041 [2024-07-25 19:18:28.347748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.347773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.347915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.347940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.348120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.348147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.041 [2024-07-25 19:18:28.348293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.041 [2024-07-25 19:18:28.348318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.041 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.348462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.348488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 
00:27:36.042 [2024-07-25 19:18:28.348636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.348663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.348803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.348829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.348975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.349000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.349141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.349168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.349347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.349373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 
00:27:36.042 [2024-07-25 19:18:28.349520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.349545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.349686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.349711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.349853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.349879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.350074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.350100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.350258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.350284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 
00:27:36.042 [2024-07-25 19:18:28.350438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.350463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.350645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.350672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.350812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.350837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.350973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.351003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.351166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.351192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 
00:27:36.042 [2024-07-25 19:18:28.351361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.351387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.351535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.351560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.351728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.351753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.351920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.351946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.352112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.352138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 
00:27:36.042 [2024-07-25 19:18:28.352312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.352337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.352490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.352515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.352653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.352678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.352826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.352852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.352985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.353011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 
00:27:36.042 [2024-07-25 19:18:28.353168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.353194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.353365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.353392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.353572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.353598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.353736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.353762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.353898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.353925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 
00:27:36.042 [2024-07-25 19:18:28.354071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.354096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.354244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.354269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.354444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.354469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.354611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.354637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.354793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.354833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 
00:27:36.042 [2024-07-25 19:18:28.355011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.355038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.042 [2024-07-25 19:18:28.355228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.042 [2024-07-25 19:18:28.355255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.042 qpair failed and we were unable to recover it. 00:27:36.043 [2024-07-25 19:18:28.355403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.043 [2024-07-25 19:18:28.355430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.043 qpair failed and we were unable to recover it. 00:27:36.043 [2024-07-25 19:18:28.355606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.043 [2024-07-25 19:18:28.355634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.043 qpair failed and we were unable to recover it. 00:27:36.043 [2024-07-25 19:18:28.355786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.043 [2024-07-25 19:18:28.355813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.043 qpair failed and we were unable to recover it. 
00:27:36.043 [2024-07-25 19:18:28.355955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.355986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.356150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.356177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.356327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.356353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.356526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.356551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.356695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.356720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.356865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.356890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.357061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.357087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.357238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.357263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.357407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.357432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.357582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.357607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.357780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.357805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.357947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.357973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.358146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.358172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.358339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.358365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.358508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.358534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.358712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.358738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.358876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.358902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.359072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.359097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.359245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.359271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.359406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.359432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.359619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.359644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.359789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.359814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.359949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.359975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.360124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.360151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.360319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.360345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.360504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.360530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.360715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.360741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.360894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.360923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.361066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.361092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.361243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.361269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.361412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.361438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.361580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.361607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.361773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.361799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.043 qpair failed and we were unable to recover it.
00:27:36.043 [2024-07-25 19:18:28.361936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.043 [2024-07-25 19:18:28.361961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.362099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.362129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.362291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.362316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.362457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.362482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.362656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.362682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.362817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.362844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.363010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.363036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.363169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.363195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.363353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.363378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.363531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.363557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.363695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.363721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.363888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.363913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.364068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.364094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.364241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.364267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.364474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.364499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.364643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.364668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.364814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.364841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.364983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.365008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.365184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.365211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.365349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.365375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.365540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.365566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.365730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.365759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.365926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.365952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.366118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.366144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.366295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.366321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.366489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.366516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.366682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.366707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.366851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.366876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.367046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.367072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.367216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.367242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.367387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.367412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.367558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.367584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.367752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.367778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.367929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.367954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.368124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.368150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.368298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.368324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.368469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.368495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.368633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.368658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.044 qpair failed and we were unable to recover it.
00:27:36.044 [2024-07-25 19:18:28.368804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.044 [2024-07-25 19:18:28.368830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.369000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.369026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.369201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.369227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.369368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.369394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.369535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.369562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.369707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.369733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.369887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.369912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.370053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.370079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.370250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.370277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.370443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.370469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.370609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.370634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.370816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.370842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.371006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.371032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.371185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.371211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.371374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.371400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.371556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.371581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.371752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.371778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.371953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.371979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.372117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.372143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.372288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.372314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.372471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.372497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.372653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.372678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.372858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.372886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.373028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.373054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.373216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.373243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.373394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.373419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.373555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.373580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.373734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.373759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.373934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.373960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.374118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.374145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.374324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.374350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.045 [2024-07-25 19:18:28.374496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.045 [2024-07-25 19:18:28.374523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.045 qpair failed and we were unable to recover it.
00:27:36.046 [2024-07-25 19:18:28.374675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.046 [2024-07-25 19:18:28.374702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.046 qpair failed and we were unable to recover it.
00:27:36.046 [2024-07-25 19:18:28.374885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.046 [2024-07-25 19:18:28.374911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.046 qpair failed and we were unable to recover it.
00:27:36.046 [2024-07-25 19:18:28.375053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.046 [2024-07-25 19:18:28.375079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.046 qpair failed and we were unable to recover it.
00:27:36.046 [2024-07-25 19:18:28.375225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.046 [2024-07-25 19:18:28.375251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.046 qpair failed and we were unable to recover it.
00:27:36.046 [2024-07-25 19:18:28.375389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.046 [2024-07-25 19:18:28.375415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.046 qpair failed and we were unable to recover it.
00:27:36.046 [2024-07-25 19:18:28.375588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.046 [2024-07-25 19:18:28.375614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.046 qpair failed and we were unable to recover it.
00:27:36.046 [2024-07-25 19:18:28.375768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.046 [2024-07-25 19:18:28.375793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.046 qpair failed and we were unable to recover it.
00:27:36.046 [2024-07-25 19:18:28.375940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.046 [2024-07-25 19:18:28.375966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.046 qpair failed and we were unable to recover it.
00:27:36.046 [2024-07-25 19:18:28.376142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.046 [2024-07-25 19:18:28.376168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.046 qpair failed and we were unable to recover it.
00:27:36.046 [2024-07-25 19:18:28.376312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.046 [2024-07-25 19:18:28.376338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.046 qpair failed and we were unable to recover it.
00:27:36.046 [2024-07-25 19:18:28.376510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.046 [2024-07-25 19:18:28.376535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.046 qpair failed and we were unable to recover it.
00:27:36.046 [2024-07-25 19:18:28.376701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.046 [2024-07-25 19:18:28.376728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.046 qpair failed and we were unable to recover it.
00:27:36.046 [2024-07-25 19:18:28.376892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.376917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.377085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.377116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.377274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.377299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.377467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.377493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.377655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.377680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 
00:27:36.046 [2024-07-25 19:18:28.377836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.377861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.377999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.378025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.378167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.378197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.378356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.378381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.378524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.378550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 
00:27:36.046 [2024-07-25 19:18:28.378714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.378740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.378896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.378921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.379062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.379088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.379246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.379272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.379419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.379444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 
00:27:36.046 [2024-07-25 19:18:28.379629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.379655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.379822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.379848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.380017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.380042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.380216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.380242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.380435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.380460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 
00:27:36.046 [2024-07-25 19:18:28.380622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.380648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.380811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.380837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.381012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.381037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.381173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.381199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.046 [2024-07-25 19:18:28.381339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.381366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 
00:27:36.046 [2024-07-25 19:18:28.381544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.046 [2024-07-25 19:18:28.381570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.046 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.381723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.381749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.381921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.381947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.382091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.382123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.382269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.382296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 
00:27:36.047 [2024-07-25 19:18:28.382470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.382495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.382682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.382708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.382871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.382897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.383065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.383091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.383238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.383271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 
00:27:36.047 [2024-07-25 19:18:28.383416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.383442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.383579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.383605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.383766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.383793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.383965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.383991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.384149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.384175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 
00:27:36.047 [2024-07-25 19:18:28.384341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.384367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.384530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.384556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.384717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.384743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.384908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.384934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.385080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.385124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 
00:27:36.047 [2024-07-25 19:18:28.385297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.385323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.385509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.385535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.385708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.385733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.385879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.385904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.386044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.386069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 
00:27:36.047 [2024-07-25 19:18:28.386270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.386296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.386438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.386464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.386617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.386643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.386779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.386804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.386973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.386999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 
00:27:36.047 [2024-07-25 19:18:28.387176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.387203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.387341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.387367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.387534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.387559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.387724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.387750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.387901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.387926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 
00:27:36.047 [2024-07-25 19:18:28.388089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.388121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.388286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.388311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.388486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.388512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.388657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.047 [2024-07-25 19:18:28.388683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.047 qpair failed and we were unable to recover it. 00:27:36.047 [2024-07-25 19:18:28.388838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.048 [2024-07-25 19:18:28.388864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.048 qpair failed and we were unable to recover it. 
00:27:36.048 [2024-07-25 19:18:28.389019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.048 [2024-07-25 19:18:28.389045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.048 qpair failed and we were unable to recover it. 00:27:36.048 [2024-07-25 19:18:28.389192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.048 [2024-07-25 19:18:28.389220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.048 qpair failed and we were unable to recover it. 00:27:36.048 [2024-07-25 19:18:28.389387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.048 [2024-07-25 19:18:28.389413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.048 qpair failed and we were unable to recover it. 00:27:36.048 [2024-07-25 19:18:28.389548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.048 [2024-07-25 19:18:28.389574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.048 qpair failed and we were unable to recover it. 00:27:36.048 [2024-07-25 19:18:28.389743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.048 [2024-07-25 19:18:28.389769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.048 qpair failed and we were unable to recover it. 
00:27:36.048 [2024-07-25 19:18:28.389921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.048 [2024-07-25 19:18:28.389946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.048 qpair failed and we were unable to recover it. 00:27:36.048 [2024-07-25 19:18:28.390095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.048 [2024-07-25 19:18:28.390125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.048 qpair failed and we were unable to recover it. 00:27:36.048 [2024-07-25 19:18:28.390272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.048 [2024-07-25 19:18:28.390297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.048 qpair failed and we were unable to recover it. 00:27:36.048 [2024-07-25 19:18:28.390466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.048 [2024-07-25 19:18:28.390492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.048 qpair failed and we were unable to recover it. 00:27:36.048 [2024-07-25 19:18:28.390660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.048 [2024-07-25 19:18:28.390686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.048 qpair failed and we were unable to recover it. 
00:27:36.048 [2024-07-25 19:18:28.390854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.390880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.391023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.391049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.391189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.391216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.391372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.391398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.391539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.391566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.391742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.391768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.391924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.391950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.392086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.392118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.392276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.392301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.392444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.392470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.392636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.392662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.392803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.392828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.392969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.392995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.393158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.393184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.393324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.393350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.393487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.393513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.393679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.393705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.393877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.393903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.394053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.394078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.394249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.394275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.394424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.394450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.394633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.394659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.394812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.394838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.394988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.395013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.395159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.395185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.395349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.395375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.048 qpair failed and we were unable to recover it.
00:27:36.048 [2024-07-25 19:18:28.395538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.048 [2024-07-25 19:18:28.395565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.395705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.395735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.395913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.395939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.396119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.396146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.396289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.396315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.396482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.396508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.396676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.396701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.396843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.396869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.397029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.397055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.397204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.397231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.397395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.397421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.397588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.397614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.397777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.397803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.397953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.397979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.398123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.398149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.398323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.398348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.398528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.398554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.398694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.398720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.398876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.398902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.399088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.399131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.399279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.399306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.399452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.399478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.399642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.399668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.399813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.399839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.399981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.400008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.400148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.400174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.400337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.400363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.400508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.400534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.400696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.400726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.400897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.400923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.401067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.401111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.401248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.401274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.049 [2024-07-25 19:18:28.401448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.049 [2024-07-25 19:18:28.401474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.049 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.401613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.401639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.401807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.401833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.401980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.402005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.402148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.402175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.402340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.402365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.402512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.402537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.402691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.402717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.402855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.402880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.403048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.403073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.403245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.403271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.403464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.403490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.403639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.403665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.403810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.403836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.403977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.404002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.404154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.404180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.404320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.404346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.404515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.404541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.404704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.404730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.404887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.404913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.405107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.405134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.405319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.405344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.405529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.405555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.405751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.405781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.405927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.405953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.406097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.406129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.406296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.406322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.406506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.406532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.406701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.406727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.406873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.406900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.407067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.407093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.407266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.407292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.407435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.407461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.407605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.407632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.407802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.407829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.408001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.408028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.408197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.408224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.408398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.408424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.408579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.408605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.408746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.408772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.408917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.408943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.409087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.050 [2024-07-25 19:18:28.409125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.050 qpair failed and we were unable to recover it.
00:27:36.050 [2024-07-25 19:18:28.409289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.051 [2024-07-25 19:18:28.409314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.051 qpair failed and we were unable to recover it.
00:27:36.051 [2024-07-25 19:18:28.409490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.051 [2024-07-25 19:18:28.409516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.051 qpair failed and we were unable to recover it.
00:27:36.051 [2024-07-25 19:18:28.409656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.051 [2024-07-25 19:18:28.409681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.051 qpair failed and we were unable to recover it.
00:27:36.051 [2024-07-25 19:18:28.409834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.051 [2024-07-25 19:18:28.409860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.051 qpair failed and we were unable to recover it.
00:27:36.051 [2024-07-25 19:18:28.410023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.051 [2024-07-25 19:18:28.410049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.051 qpair failed and we were unable to recover it.
00:27:36.051 [2024-07-25 19:18:28.410224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.051 [2024-07-25 19:18:28.410250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.051 qpair failed and we were unable to recover it.
00:27:36.051 [2024-07-25 19:18:28.410396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.051 [2024-07-25 19:18:28.410422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.051 qpair failed and we were unable to recover it.
00:27:36.051 [2024-07-25 19:18:28.410591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.051 [2024-07-25 19:18:28.410616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.051 qpair failed and we were unable to recover it.
00:27:36.051 [2024-07-25 19:18:28.410761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.051 [2024-07-25 19:18:28.410786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.051 qpair failed and we were unable to recover it.
00:27:36.051 [2024-07-25 19:18:28.410960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.051 [2024-07-25 19:18:28.410985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.051 qpair failed and we were unable to recover it.
00:27:36.051 [2024-07-25 19:18:28.411168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.051 [2024-07-25 19:18:28.411194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.051 qpair failed and we were unable to recover it.
00:27:36.051 [2024-07-25 19:18:28.411364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.051 [2024-07-25 19:18:28.411395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.051 qpair failed and we were unable to recover it.
00:27:36.051 [2024-07-25 19:18:28.411589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.051 [2024-07-25 19:18:28.411614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.051 qpair failed and we were unable to recover it.
00:27:36.051 [2024-07-25 19:18:28.411757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.051 [2024-07-25 19:18:28.411782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.051 qpair failed and we were unable to recover it.
00:27:36.051 [2024-07-25 19:18:28.411918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.051 [2024-07-25 19:18:28.411944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.051 qpair failed and we were unable to recover it.
00:27:36.051 [2024-07-25 19:18:28.412118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.412144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.412300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.412326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.412485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.412510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.412642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.412668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.412832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.412858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 
00:27:36.051 [2024-07-25 19:18:28.413027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.413053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.413196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.413222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.413370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.413396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.413565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.413591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.413776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.413802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 
00:27:36.051 [2024-07-25 19:18:28.413938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.413962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.414123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.414149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.414303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.414328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.414526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.414552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.414720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.414746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 
00:27:36.051 [2024-07-25 19:18:28.414881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.414906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.415041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.415066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.415234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.415260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.415397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.415422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.415564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.415591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 
00:27:36.051 [2024-07-25 19:18:28.415724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.415750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.415925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.415951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.416122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.416148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.416301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.416327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.416468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.416495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 
00:27:36.051 [2024-07-25 19:18:28.416663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.051 [2024-07-25 19:18:28.416688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.051 qpair failed and we were unable to recover it. 00:27:36.051 [2024-07-25 19:18:28.416834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.416860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.416997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.417023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.417194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.417220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.417388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.417414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 
00:27:36.052 [2024-07-25 19:18:28.417551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.417577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.417722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.417748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.417882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.417907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.418069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.418110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.418263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.418293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 
00:27:36.052 [2024-07-25 19:18:28.418437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.418462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.418607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.418634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.418812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.418838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.418987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.419013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.419169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.419195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 
00:27:36.052 [2024-07-25 19:18:28.419379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.419414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.419581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.419607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.419773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.419799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.419965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.419991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.420142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.420168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 
00:27:36.052 [2024-07-25 19:18:28.420313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.420339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.420496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.420521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.420691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.420716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.420873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.420899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.421085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.421117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 
00:27:36.052 [2024-07-25 19:18:28.421257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.421282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.421441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.421467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.421636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.421662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.421839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.421865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.422000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.422025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 
00:27:36.052 [2024-07-25 19:18:28.422165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.422191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.422360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.422386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.422541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.422566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.422696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.422722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.422887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.422913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 
00:27:36.052 [2024-07-25 19:18:28.423086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.423121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.423268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.423297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.423447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.423474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.423632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.423657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.423823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.423849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 
00:27:36.052 [2024-07-25 19:18:28.423991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.424017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.052 qpair failed and we were unable to recover it. 00:27:36.052 [2024-07-25 19:18:28.424192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.052 [2024-07-25 19:18:28.424218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 00:27:36.053 [2024-07-25 19:18:28.424393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.424419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 00:27:36.053 [2024-07-25 19:18:28.424559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.424584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 00:27:36.053 [2024-07-25 19:18:28.424751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.424777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 
00:27:36.053 [2024-07-25 19:18:28.424944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.424970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 00:27:36.053 [2024-07-25 19:18:28.425141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.425167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 00:27:36.053 [2024-07-25 19:18:28.425348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.425374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 00:27:36.053 [2024-07-25 19:18:28.425539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.425564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 00:27:36.053 [2024-07-25 19:18:28.425709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.425734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 
00:27:36.053 [2024-07-25 19:18:28.425898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.425924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 00:27:36.053 [2024-07-25 19:18:28.426061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.426087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 00:27:36.053 [2024-07-25 19:18:28.426271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.426296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 00:27:36.053 [2024-07-25 19:18:28.426449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.426474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 00:27:36.053 [2024-07-25 19:18:28.426624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.426650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 
00:27:36.053 [2024-07-25 19:18:28.426805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.426831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 00:27:36.053 [2024-07-25 19:18:28.426975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.427000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 00:27:36.053 [2024-07-25 19:18:28.427147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.427173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 00:27:36.053 [2024-07-25 19:18:28.427337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.427363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 00:27:36.053 [2024-07-25 19:18:28.427506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.053 [2024-07-25 19:18:28.427532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.053 qpair failed and we were unable to recover it. 
00:27:36.053 [2024-07-25 19:18:28.427674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.427700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.427863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.427888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.428029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.428055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.428237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.428263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.428414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.428439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.428578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.428604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.428771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.428797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.428975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.429007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.429181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.429213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.429371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.429409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.429566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.429596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.429748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.429779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.429975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.430001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.430185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.430211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.430351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.430377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.430515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.430540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.430701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.430727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.430886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.430912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.431049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.431074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.431212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.431238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.431386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.431412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.431609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.431634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.053 [2024-07-25 19:18:28.431774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.053 [2024-07-25 19:18:28.431799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.053 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.431965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.431989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.432153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.432179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.432322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.432347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.432505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.432530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.432689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.432715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.432859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.432884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.433037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.433062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.433205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.433231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.433388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.433417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.433557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.433583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.433731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.433756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.433921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.433947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.434107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.434133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.434309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.434334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.434473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.434498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.434653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.434679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.434811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.434837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.434980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.435005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.435175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.435201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.435356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.435383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.435551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.435576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.435720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.435750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.435899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.435925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.436070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.436097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.436283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.436309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.436484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.436509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.436643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.436669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.436822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.436847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.054 [2024-07-25 19:18:28.437001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.054 [2024-07-25 19:18:28.437026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.054 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.437169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.437195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.437368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.437393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.437530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.437555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.437728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.437754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.437908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.437934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.438111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.438138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.438283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.438309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.438468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.438493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.438648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.438674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.438830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.438857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.438993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.439018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.439162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.439188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.439359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.439385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.439522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.439547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.439700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.439725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.439868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.439893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.440045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.440070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.440239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.440268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.440422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.440447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.440588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.440618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.440772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.440797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.441003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.441029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.441176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.441202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.441341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.441366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.441539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.441580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.441763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.441790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.441944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.441970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.442113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.442152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.442292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.442319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.442488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.442514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.055 [2024-07-25 19:18:28.442675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.055 [2024-07-25 19:18:28.442701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.055 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.442852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.442881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.443060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.443085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.443260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.443287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.443470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.443496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.443629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.443654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.443818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.443843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.444018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.444044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.444199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.444227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.444374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.444400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.444548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.444574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.444723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.444750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.444927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.444952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.445113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.445139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.445296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.445323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.445479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.445506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.445669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.445695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.445897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.445923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.446106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.446133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.446274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.446300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.446466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.446492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.446669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.446696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.446854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.446880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.447027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.447052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.447217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.447244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.447397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.447429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.447595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.447621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.447765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.447791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.447931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.447957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.448113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.448144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.448288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.448314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.448491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.448517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.448655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.448680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.448866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.448891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.449036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.449062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.449232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.449258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.449430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.449469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.449648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.056 [2024-07-25 19:18:28.449676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.056 qpair failed and we were unable to recover it.
00:27:36.056 [2024-07-25 19:18:28.449812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.449838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.450006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.450034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.450201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.450228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.450385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.450412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.450553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.450579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.450762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.450787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.450932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.450957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.451115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.451142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.451298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.451333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.451506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.451534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.451692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.451719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.451855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.451883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.452026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.452055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.452235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.452273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.452450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.452477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.452625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.452652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.452786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.452813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.452951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.452981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.453190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.453229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.057 [2024-07-25 19:18:28.453403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.057 [2024-07-25 19:18:28.453434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.057 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.453581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.326 [2024-07-25 19:18:28.453609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.326 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.453769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.326 [2024-07-25 19:18:28.453795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.326 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.454018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.326 [2024-07-25 19:18:28.454043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.326 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.454206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.326 [2024-07-25 19:18:28.454233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.326 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.454378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.326 [2024-07-25 19:18:28.454404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.326 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.454542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.326 [2024-07-25 19:18:28.454568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.326 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.454731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.326 [2024-07-25 19:18:28.454757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.326 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.454892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.326 [2024-07-25 19:18:28.454918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.326 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.455089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.326 [2024-07-25 19:18:28.455128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.326 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.455267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.326 [2024-07-25 19:18:28.455292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.326 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.455473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.326 [2024-07-25 19:18:28.455499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.326 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.455633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.326 [2024-07-25 19:18:28.455659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.326 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.455838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.326 [2024-07-25 19:18:28.455865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.326 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.456008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.326 [2024-07-25 19:18:28.456034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.326 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.456201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.326 [2024-07-25 19:18:28.456227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.326 qpair failed and we were unable to recover it.
00:27:36.326 [2024-07-25 19:18:28.456390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.456425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.456565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.456590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.456739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.456765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.456929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.456955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.457096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.457129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.457270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.457296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.457444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.457471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.457640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.457666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.457820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.457845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.458011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.458037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.458198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.458230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.458383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.458418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.458561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.458586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.458726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.458752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.458892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.458917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.459057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.459083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.459246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.459273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.459418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.459445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.459619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.459646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.459799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.459825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.459964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.459990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.460136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.460163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.460310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.460337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.460488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.460515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.460677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.460703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.460850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.460876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.461017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.461043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.461231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.461257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.461410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.461437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.461613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.461640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.461798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.461823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.461990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.462016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.462194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.462220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.462391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.462417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.462584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.327 [2024-07-25 19:18:28.462610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.327 qpair failed and we were unable to recover it.
00:27:36.327 [2024-07-25 19:18:28.462756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.327 [2024-07-25 19:18:28.462781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.327 qpair failed and we were unable to recover it. 00:27:36.327 [2024-07-25 19:18:28.462931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.327 [2024-07-25 19:18:28.462956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.327 qpair failed and we were unable to recover it. 00:27:36.327 [2024-07-25 19:18:28.463110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.327 [2024-07-25 19:18:28.463140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.327 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.463287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.463312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.463487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.463512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 
00:27:36.328 [2024-07-25 19:18:28.463680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.463706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.463875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.463900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.464037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.464063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.464218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.464244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.464421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.464447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 
00:27:36.328 [2024-07-25 19:18:28.464591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.464619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.464757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.464782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.464919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.464944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.465129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.465155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.465297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.465323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 
00:27:36.328 [2024-07-25 19:18:28.465499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.465525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.465668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.465694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.465844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.465869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.466065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.466091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.466265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.466291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 
00:27:36.328 [2024-07-25 19:18:28.466432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.466458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.466602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.466628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.466766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.466792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.466956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.466982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.467138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.467165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 
00:27:36.328 [2024-07-25 19:18:28.467338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.467364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.467511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.467537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.467696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.467722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.467871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.467897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.468036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.468062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 
00:27:36.328 [2024-07-25 19:18:28.468227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.468253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.468395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.468430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.468572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.468598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.468749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.468775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.468932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.468958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 
00:27:36.328 [2024-07-25 19:18:28.469106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.469132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.469292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.469318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.469487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.469513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.469658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.469685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.469854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.469879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 
00:27:36.328 [2024-07-25 19:18:28.470028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.328 [2024-07-25 19:18:28.470054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.328 qpair failed and we were unable to recover it. 00:27:36.328 [2024-07-25 19:18:28.470244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.470270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.470418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.470442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.470587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.470612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.470805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.470831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 
00:27:36.329 [2024-07-25 19:18:28.470990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.471016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.471174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.471200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.471340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.471366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.471543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.471582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.471756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.471783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 
00:27:36.329 [2024-07-25 19:18:28.471958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.471984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.472145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.472173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.472337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.472363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.472504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.472529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.472666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.472691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 
00:27:36.329 [2024-07-25 19:18:28.472866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.472892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.473045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.473072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.473250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.473276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.473484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.473509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.473670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.473696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 
00:27:36.329 [2024-07-25 19:18:28.473863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.473888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.474045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.474070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.474250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.474275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.474421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.474447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.474616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.474642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 
00:27:36.329 [2024-07-25 19:18:28.474798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.474824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.474978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.475004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.475198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.475239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.475394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.475429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.475576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.475601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 
00:27:36.329 [2024-07-25 19:18:28.475750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.475777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.475936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.475964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.476115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.476142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.476293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.476319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.476470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.476496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 
00:27:36.329 [2024-07-25 19:18:28.476635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.476663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.476815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.476842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.477000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.477026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.329 qpair failed and we were unable to recover it. 00:27:36.329 [2024-07-25 19:18:28.477195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.329 [2024-07-25 19:18:28.477221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.477359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.477384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 
00:27:36.330 [2024-07-25 19:18:28.477588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.477613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.477780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.477806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.477951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.477977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.478133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.478170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.478316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.478342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 
00:27:36.330 [2024-07-25 19:18:28.478486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.478511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.478677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.478702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.478893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.478918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.479065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.479091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.479281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.479307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 
00:27:36.330 [2024-07-25 19:18:28.479446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.479471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.479650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.479675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.479823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.479849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.479991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.480016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.480186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.480213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 
00:27:36.330 [2024-07-25 19:18:28.480370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.480396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.480566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.480591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.480775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.480801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.480958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.480984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.481131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.481159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 
00:27:36.330 [2024-07-25 19:18:28.481299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.481326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.481483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.481508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.481668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.481694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.481829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.481854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.482026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.482051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63dc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 
00:27:36.330 [2024-07-25 19:18:28.482231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.482272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.482439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.482465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.482621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.482648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.482820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.482845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.482986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.483012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 
00:27:36.330 [2024-07-25 19:18:28.483200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.483241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.483393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.483422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.483598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.483625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.483765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.483791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 00:27:36.330 [2024-07-25 19:18:28.483968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.483994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.330 qpair failed and we were unable to recover it. 
00:27:36.330 [2024-07-25 19:18:28.484187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.330 [2024-07-25 19:18:28.484227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.484373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.484400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.484549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.484574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.484741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.484766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.484912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.484937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 
00:27:36.331 [2024-07-25 19:18:28.485088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.485119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.485272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.485297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.485464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.485489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.485641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.485672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.485858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.485898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 
00:27:36.331 [2024-07-25 19:18:28.486074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.486121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.486282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.486310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.486459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.486485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.486637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.486664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.486842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.486870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 
00:27:36.331 [2024-07-25 19:18:28.487028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.487055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.487221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.487247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.487393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.487418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.487577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.487603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.487751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.487777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 
00:27:36.331 [2024-07-25 19:18:28.487946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.487972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.488160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.488199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.488357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.488384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.488568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.488594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.488739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.488765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 
00:27:36.331 [2024-07-25 19:18:28.488915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.488940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.489118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.489145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.489294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.489319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.489470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.489498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.331 [2024-07-25 19:18:28.489673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.489699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 
00:27:36.331 [2024-07-25 19:18:28.489840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.331 [2024-07-25 19:18:28.489867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.331 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.490039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.490065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.490217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.490244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.490386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.490422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.490594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.490619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 
00:27:36.332 [2024-07-25 19:18:28.490793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.490823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.490961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.490986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.491135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.491161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.491320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.491345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.491520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.491546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 
00:27:36.332 [2024-07-25 19:18:28.491699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.491725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.491874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.491899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.492035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.492061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.492234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.492260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.492418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.492443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 
00:27:36.332 [2024-07-25 19:18:28.492589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.492615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.492775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.492800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.492948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.492973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.493158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.493184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.493327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.493353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 
00:27:36.332 [2024-07-25 19:18:28.493517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.332 [2024-07-25 19:18:28.493542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.332 qpair failed and we were unable to recover it.
00:27:36.332 [2024-07-25 19:18:28.493707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.332 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:36.332 [2024-07-25 19:18:28.493733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.332 qpair failed and we were unable to recover it.
00:27:36.332 [2024-07-25 19:18:28.493902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.332 [2024-07-25 19:18:28.493927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.332 qpair failed and we were unable to recover it.
00:27:36.332 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:27:36.332 [2024-07-25 19:18:28.494083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.332 [2024-07-25 19:18:28.494121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.332 qpair failed and we were unable to recover it.
00:27:36.332 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:36.332 [2024-07-25 19:18:28.494271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.332 [2024-07-25 19:18:28.494298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.332 qpair failed and we were unable to recover it.
00:27:36.332 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:36.332 [2024-07-25 19:18:28.494442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.332 [2024-07-25 19:18:28.494468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.332 qpair failed and we were unable to recover it.
00:27:36.332 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:36.332 [2024-07-25 19:18:28.494639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.332 [2024-07-25 19:18:28.494666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.332 qpair failed and we were unable to recover it.
00:27:36.332 [2024-07-25 19:18:28.494813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.332 [2024-07-25 19:18:28.494838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.332 qpair failed and we were unable to recover it.
00:27:36.332 [2024-07-25 19:18:28.495004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.495030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.495173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.495208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.495347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.495377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.495559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.495584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.495751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.495776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 
00:27:36.332 [2024-07-25 19:18:28.495962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.495988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.496133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.496159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.496302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.496329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.332 [2024-07-25 19:18:28.496503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.332 [2024-07-25 19:18:28.496528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.332 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.496669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.496694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 
00:27:36.333 [2024-07-25 19:18:28.496840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.496866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.497035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.497061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.497219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.497245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.497386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.497412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.497584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.497610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 
00:27:36.333 [2024-07-25 19:18:28.497773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.497798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.497939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.497976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.498170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.498197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.498335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.498361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.498558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.498584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 
00:27:36.333 [2024-07-25 19:18:28.498738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.498763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.498918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.498944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.499085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.499118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.499281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.499306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.499454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.499494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 
00:27:36.333 [2024-07-25 19:18:28.499699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.499727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.499895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.499922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.500060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.500085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.500277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.500303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.500446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.500478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 
00:27:36.333 [2024-07-25 19:18:28.500624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.500652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.500820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.500847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.501023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.501049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.501239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.501266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.501431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.501459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 
00:27:36.333 [2024-07-25 19:18:28.501600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.501626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.501794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.501820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.501961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.501987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.502170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.502210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.502385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.502414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 
00:27:36.333 [2024-07-25 19:18:28.502555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.502581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.502749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.502775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.502910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.502936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.503080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.503110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.503260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.503286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 
00:27:36.333 [2024-07-25 19:18:28.503457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.503482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.333 qpair failed and we were unable to recover it. 00:27:36.333 [2024-07-25 19:18:28.503630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.333 [2024-07-25 19:18:28.503655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.503803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.503829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.503965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.503991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.504128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.504163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 
00:27:36.334 [2024-07-25 19:18:28.504305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.504330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.504503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.504528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.504678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.504704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.504869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.504894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.505089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.505127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 
00:27:36.334 [2024-07-25 19:18:28.505292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.505317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.505517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.505547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.505691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.505717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.505857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.505884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.506032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.506058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 
00:27:36.334 [2024-07-25 19:18:28.506233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.506259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.506399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.506426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.506574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.506600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.506737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.506763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.506931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.506956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 
00:27:36.334 [2024-07-25 19:18:28.507115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.507162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.507320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.507350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.507498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.507525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.507677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.507705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.507848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.507874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 
00:27:36.334 [2024-07-25 19:18:28.508056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.508084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63cc000b90 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.508242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.508269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.508442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.508468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.508609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.508635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.508805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.508831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 
00:27:36.334 [2024-07-25 19:18:28.508988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.509015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.509162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.509188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.509336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.509361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.509531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.509557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.509713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.509739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 
00:27:36.334 [2024-07-25 19:18:28.509882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.509907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.510091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.510149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.510311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.510339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.510487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.510519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.334 qpair failed and we were unable to recover it. 00:27:36.334 [2024-07-25 19:18:28.510691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.334 [2024-07-25 19:18:28.510717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 
00:27:36.335 [2024-07-25 19:18:28.510857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.510883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.511031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.511056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.511222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.511250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.511391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.511427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.511600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.511627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 
00:27:36.335 [2024-07-25 19:18:28.511789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.511815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.511957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.511983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.512139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.512166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.512455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.512481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.512636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.512662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 
00:27:36.335 [2024-07-25 19:18:28.512818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.512844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.513013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.513039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.513193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.513220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.513370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.513399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.513571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.513598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 
00:27:36.335 [2024-07-25 19:18:28.513748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.513775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.513944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.513970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.514120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.514150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.514306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.514333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.514498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.514524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 
00:27:36.335 [2024-07-25 19:18:28.514689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.514715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.514861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.514887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.515029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.515056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.515222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.515249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 [2024-07-25 19:18:28.515410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.515438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 
00:27:36.335 [2024-07-25 19:18:28.515615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.515641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.335 [2024-07-25 19:18:28.515838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.515866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:36.335 [2024-07-25 19:18:28.516010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.516037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 00:27:36.335 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.335 [2024-07-25 19:18:28.516197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.516227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.335 qpair failed and we were unable to recover it. 
00:27:36.335 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.335 [2024-07-25 19:18:28.516371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.335 [2024-07-25 19:18:28.516399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.516549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.516578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.516749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.516775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.516959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.516985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.517134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.517172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 
00:27:36.336 [2024-07-25 19:18:28.517308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.517334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.517494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.517521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.517666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.517698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.517845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.517871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.518047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.518073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 
00:27:36.336 [2024-07-25 19:18:28.518252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.518278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.518427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.518454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.518588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.518614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.518758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.518783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.518935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.518960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 
00:27:36.336 [2024-07-25 19:18:28.519109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.519135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.519302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.519328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.519477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.519503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.519651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.519677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.519846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.519872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 
00:27:36.336 [2024-07-25 19:18:28.520042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.520067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.520217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.520243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.520395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.520421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.520590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.520616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.520758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.520784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 
00:27:36.336 [2024-07-25 19:18:28.520952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.520977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.521125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.521152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.521329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.521355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.521506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.521532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.521716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.521741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 
00:27:36.336 [2024-07-25 19:18:28.521909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.521934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.522099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.522136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.522285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.522313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.522491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.522517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.522676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.522702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 
00:27:36.336 [2024-07-25 19:18:28.522848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.522874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.523055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.523081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.523230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.336 [2024-07-25 19:18:28.523256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.336 qpair failed and we were unable to recover it. 00:27:36.336 [2024-07-25 19:18:28.523401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.523427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.523603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.523628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 
00:27:36.337 [2024-07-25 19:18:28.523765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.523790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.523952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.523978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.524151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.524177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.524316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.524341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.524475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.524501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 
00:27:36.337 [2024-07-25 19:18:28.524679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.524705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.524853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.524879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.525053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.525084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.525227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.525253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.525394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.525421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 
00:27:36.337 [2024-07-25 19:18:28.525586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.525612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.525762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.525787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.525957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.525982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.526125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.526153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.526296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.526322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 
00:27:36.337 [2024-07-25 19:18:28.526458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.526484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.526683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.526708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.526875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.526900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.527044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.527070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.527244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.527270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 
00:27:36.337 [2024-07-25 19:18:28.527466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.527492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.527643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.527669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.527845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.527872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.528012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.528038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.528183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.528209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 
00:27:36.337 [2024-07-25 19:18:28.528362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.528388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.528583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.528609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.528752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.528779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.528926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.528953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.529107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.529134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 
00:27:36.337 [2024-07-25 19:18:28.529333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.529359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.529522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.529548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.529686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.529712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.529874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.529900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.337 [2024-07-25 19:18:28.530040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.530072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 
00:27:36.337 [2024-07-25 19:18:28.530274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.337 [2024-07-25 19:18:28.530301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.337 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.530447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.530473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.530643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.530669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.530842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.530868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.531003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.531029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 
00:27:36.338 [2024-07-25 19:18:28.531207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.531234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.531399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.531425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.531668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.531694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.531836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.531861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.532033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.532059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 
00:27:36.338 [2024-07-25 19:18:28.532215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.532241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.532403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.532429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.532694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.532720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.532897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.532923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.533061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.533087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 
00:27:36.338 [2024-07-25 19:18:28.533252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.533279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.533431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.533457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.533711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.533737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.533911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.533937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.534085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.534120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 
00:27:36.338 [2024-07-25 19:18:28.534296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.534323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.534477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.534503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.534644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.534670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.534839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.534866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.535033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.535059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 
00:27:36.338 [2024-07-25 19:18:28.535218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.535245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.535417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.535443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.535579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.535605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.535775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.535801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.535953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.535979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 
00:27:36.338 [2024-07-25 19:18:28.536126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.536152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.536317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.536343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.536501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.536527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.536671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.536697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.536854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.536880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 
00:27:36.338 [2024-07-25 19:18:28.537030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.537056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.537253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.537280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.537423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.338 [2024-07-25 19:18:28.537449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.338 qpair failed and we were unable to recover it. 00:27:36.338 [2024-07-25 19:18:28.537600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.537628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.537808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.537838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 
00:27:36.339 [2024-07-25 19:18:28.537990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.538016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.538179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.538206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.538372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.538398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.538541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.538567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.538726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.538752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 
00:27:36.339 [2024-07-25 19:18:28.538928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.538955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.539141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.539168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.539336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.539362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.539528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.539554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.539706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.539732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 
00:27:36.339 [2024-07-25 19:18:28.539885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.539912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.540058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.540084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.540255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.540281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.540474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.540500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.540653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.540678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 
00:27:36.339 [2024-07-25 19:18:28.540831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.540856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.540996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.541022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.541192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.541219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.541396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.541422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.541593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.541619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 
00:27:36.339 [2024-07-25 19:18:28.541761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.541787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.541922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.541948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.542123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.542159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.542324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.542350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 Malloc0 00:27:36.339 [2024-07-25 19:18:28.542504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.542531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 
00:27:36.339 [2024-07-25 19:18:28.542708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.542734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.339 [2024-07-25 19:18:28.542886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.542913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:36.339 [2024-07-25 19:18:28.543078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.543109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.339 [2024-07-25 19:18:28.543280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.543306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.339 qpair failed and we were unable to recover it. 
00:27:36.339 [2024-07-25 19:18:28.543461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.543487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.543643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.543669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.543811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.543838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.543981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.544007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 00:27:36.339 [2024-07-25 19:18:28.544148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.339 [2024-07-25 19:18:28.544191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.339 qpair failed and we were unable to recover it. 
00:27:36.339 [2024-07-25 19:18:28.544368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.544396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.544542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.544568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.544835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.544861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.545045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.545075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.545239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.545266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 
00:27:36.340 [2024-07-25 19:18:28.545407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.545433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.545585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.545611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.545791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.545817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.545963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.545989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.546167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.546195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 [2024-07-25 19:18:28.546185] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.340 qpair failed and we were unable to recover it. 
00:27:36.340 [2024-07-25 19:18:28.546344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.546370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.546530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.546558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.546728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.546754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.546929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.546954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.547133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.547163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 
00:27:36.340 [2024-07-25 19:18:28.547317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.547344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.547497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.547527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.547698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.547723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.547892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.547918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.548063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.548089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 
00:27:36.340 [2024-07-25 19:18:28.548246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.548272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.548436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.548462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.548639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.548665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.548815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.548840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.548980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.549005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 
00:27:36.340 [2024-07-25 19:18:28.549180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.549207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.549401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.549427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.549574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.549600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.549767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.549792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.549942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.549968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 
00:27:36.340 [2024-07-25 19:18:28.550124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.550159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.340 [2024-07-25 19:18:28.550299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.340 [2024-07-25 19:18:28.550325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.340 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.550474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.550500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.550647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.550673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.550818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.550844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 
00:27:36.341 [2024-07-25 19:18:28.551020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.551046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.551198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.551225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.551386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.551412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.551580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.551605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.551752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.551778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 
00:27:36.341 [2024-07-25 19:18:28.551925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.551950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.552118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.552145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.552287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.552313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.552495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.552537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.552711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.552739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 
00:27:36.341 [2024-07-25 19:18:28.552900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.552927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.553072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.553097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.553286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.553312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.553488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.553514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.553657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.553682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 
00:27:36.341 [2024-07-25 19:18:28.553845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.553870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.554014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.554040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.554197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.554227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.554404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.341 [2024-07-25 19:18:28.554430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 
00:27:36.341 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:36.341 [2024-07-25 19:18:28.554589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.554616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.341 [2024-07-25 19:18:28.554759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.554786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.554954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.554980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.555130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.555159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 
00:27:36.341 [2024-07-25 19:18:28.555323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.555348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.555523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.555548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.555694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.555721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.555859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.555885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.556058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.556085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 
00:27:36.341 [2024-07-25 19:18:28.556250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.556276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.556449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.556475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.556638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.556664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.556804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.556831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 00:27:36.341 [2024-07-25 19:18:28.556983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.341 [2024-07-25 19:18:28.557010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.341 qpair failed and we were unable to recover it. 
00:27:36.342 [2024-07-25 19:18:28.557172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.557199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.557340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.557366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.557556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.557583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.557726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.557751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.557897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.557924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 
00:27:36.342 [2024-07-25 19:18:28.558082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.558127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.558315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.558342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.558485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.558511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.558656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.558682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.558850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.558875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 
00:27:36.342 [2024-07-25 19:18:28.559050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.559077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.559260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.559286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.559427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.559453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.559600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.559626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.559809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.559834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 
00:27:36.342 [2024-07-25 19:18:28.560009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.560035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.560212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.560238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.560387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.560413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.560583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.560608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.560770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.560795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 
00:27:36.342 [2024-07-25 19:18:28.560992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.561017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.561205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.561231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.561365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.561391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.561558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.561584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.561729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.561754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 
00:27:36.342 [2024-07-25 19:18:28.561912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.561938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.562073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.562098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.562250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.562276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.342 [2024-07-25 19:18:28.562427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.562454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 
00:27:36.342 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:36.342 [2024-07-25 19:18:28.562616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.562642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.342 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.342 [2024-07-25 19:18:28.562820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.562845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.562994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.563021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.563211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.563252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 
00:27:36.342 [2024-07-25 19:18:28.563457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.563484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.563631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.563657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.342 [2024-07-25 19:18:28.563846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.342 [2024-07-25 19:18:28.563872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.342 qpair failed and we were unable to recover it. 00:27:36.343 [2024-07-25 19:18:28.564037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.343 [2024-07-25 19:18:28.564062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.343 qpair failed and we were unable to recover it. 00:27:36.343 [2024-07-25 19:18:28.564229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.343 [2024-07-25 19:18:28.564257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.343 qpair failed and we were unable to recover it. 
00:27:36.343 [2024-07-25 19:18:28.564406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.343 [2024-07-25 19:18:28.564434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.343 qpair failed and we were unable to recover it. 00:27:36.343 [2024-07-25 19:18:28.564612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.343 [2024-07-25 19:18:28.564638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.343 qpair failed and we were unable to recover it. 00:27:36.343 [2024-07-25 19:18:28.564777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.343 [2024-07-25 19:18:28.564803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.343 qpair failed and we were unable to recover it. 00:27:36.343 [2024-07-25 19:18:28.564970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.343 [2024-07-25 19:18:28.564995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.343 qpair failed and we were unable to recover it. 00:27:36.343 [2024-07-25 19:18:28.565143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.343 [2024-07-25 19:18:28.565170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.343 qpair failed and we were unable to recover it. 
00:27:36.343 [2024-07-25 19:18:28.565332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.343 [2024-07-25 19:18:28.565359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.343 qpair failed and we were unable to recover it. 00:27:36.343 [2024-07-25 19:18:28.565508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.343 [2024-07-25 19:18:28.565534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.343 qpair failed and we were unable to recover it. 00:27:36.343 [2024-07-25 19:18:28.565673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.343 [2024-07-25 19:18:28.565698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.343 qpair failed and we were unable to recover it. 00:27:36.343 [2024-07-25 19:18:28.565866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.343 [2024-07-25 19:18:28.565892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.343 qpair failed and we were unable to recover it. 00:27:36.343 [2024-07-25 19:18:28.566062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.343 [2024-07-25 19:18:28.566088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420 00:27:36.343 qpair failed and we were unable to recover it. 
00:27:36.343 [2024-07-25 19:18:28.566243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.566270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.566420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.566447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.566609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.566635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.566778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.566805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.566952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.566979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63d4000b90 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.567167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.567208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.567391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.567418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.567572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.567598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.567745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.567770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.567904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.567930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.568100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.568133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.568280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.568305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.568481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.568506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.568642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.568667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.568832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.568857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.569013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.569038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.569205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.569232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.569399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.569424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.569604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.569630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.569777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.569802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.569956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.569982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.570159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.570185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 [2024-07-25 19:18:28.570332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.570358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:36.343 [2024-07-25 19:18:28.570508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.570534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.343 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:36.343 [2024-07-25 19:18:28.570696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.343 [2024-07-25 19:18:28.570722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.343 qpair failed and we were unable to recover it.
00:27:36.344 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:36.344 [2024-07-25 19:18:28.570899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:36.344 [2024-07-25 19:18:28.570925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.571084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.571114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.571282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.571308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.571470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.571496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.571693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.571722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.571897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.571923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.572067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.572092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.572262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.572287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.572432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.572458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.572630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.572655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.572800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.572825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.573012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.573037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.573187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.573213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.573390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.573415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.573555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.573581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.573721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.573746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.573911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.573936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.574082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.574111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.574286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.344 [2024-07-25 19:18:28.574311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd250 with addr=10.0.0.2, port=4420
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.574417] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:36.344 [2024-07-25 19:18:28.576911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.344 [2024-07-25 19:18:28.577078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.344 [2024-07-25 19:18:28.577114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.344 [2024-07-25 19:18:28.577132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.344 [2024-07-25 19:18:28.577146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.344 [2024-07-25 19:18:28.577181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:36.344 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:36.344 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:36.344 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:36.344 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:36.344 19:18:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1019050
00:27:36.344 [2024-07-25 19:18:28.586866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.344 [2024-07-25 19:18:28.587030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.344 [2024-07-25 19:18:28.587057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.344 [2024-07-25 19:18:28.587072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.344 [2024-07-25 19:18:28.587086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.344 [2024-07-25 19:18:28.587123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.596850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.344 [2024-07-25 19:18:28.596996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.344 [2024-07-25 19:18:28.597022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.344 [2024-07-25 19:18:28.597037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.344 [2024-07-25 19:18:28.597050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.344 [2024-07-25 19:18:28.597079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.606848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.344 [2024-07-25 19:18:28.607001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.344 [2024-07-25 19:18:28.607028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.344 [2024-07-25 19:18:28.607043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.344 [2024-07-25 19:18:28.607057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.344 [2024-07-25 19:18:28.607085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.616821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.344 [2024-07-25 19:18:28.616975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.344 [2024-07-25 19:18:28.617002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.344 [2024-07-25 19:18:28.617016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.344 [2024-07-25 19:18:28.617030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.344 [2024-07-25 19:18:28.617058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.344 qpair failed and we were unable to recover it.
00:27:36.344 [2024-07-25 19:18:28.626848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.344 [2024-07-25 19:18:28.627005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.345 [2024-07-25 19:18:28.627032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.345 [2024-07-25 19:18:28.627046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.345 [2024-07-25 19:18:28.627060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.345 [2024-07-25 19:18:28.627088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.345 qpair failed and we were unable to recover it.
00:27:36.345 [2024-07-25 19:18:28.636877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.345 [2024-07-25 19:18:28.637040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.345 [2024-07-25 19:18:28.637067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.345 [2024-07-25 19:18:28.637085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.345 [2024-07-25 19:18:28.637098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.345 [2024-07-25 19:18:28.637135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.345 qpair failed and we were unable to recover it.
00:27:36.345 [2024-07-25 19:18:28.646852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.345 [2024-07-25 19:18:28.646999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.345 [2024-07-25 19:18:28.647025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.345 [2024-07-25 19:18:28.647047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.345 [2024-07-25 19:18:28.647062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.345 [2024-07-25 19:18:28.647092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.345 qpair failed and we were unable to recover it.
00:27:36.345 [2024-07-25 19:18:28.656901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.345 [2024-07-25 19:18:28.657053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.345 [2024-07-25 19:18:28.657079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.345 [2024-07-25 19:18:28.657094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.345 [2024-07-25 19:18:28.657115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.345 [2024-07-25 19:18:28.657145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.345 qpair failed and we were unable to recover it.
00:27:36.345 [2024-07-25 19:18:28.666915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.345 [2024-07-25 19:18:28.667052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.345 [2024-07-25 19:18:28.667079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.345 [2024-07-25 19:18:28.667094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.345 [2024-07-25 19:18:28.667115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.345 [2024-07-25 19:18:28.667145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.345 qpair failed and we were unable to recover it.
00:27:36.345 [2024-07-25 19:18:28.676932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.345 [2024-07-25 19:18:28.677130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.345 [2024-07-25 19:18:28.677156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.345 [2024-07-25 19:18:28.677171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.345 [2024-07-25 19:18:28.677184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.345 [2024-07-25 19:18:28.677213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.345 qpair failed and we were unable to recover it.
00:27:36.345 [2024-07-25 19:18:28.686988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.345 [2024-07-25 19:18:28.687147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.345 [2024-07-25 19:18:28.687174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.345 [2024-07-25 19:18:28.687188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.345 [2024-07-25 19:18:28.687202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.345 [2024-07-25 19:18:28.687230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.345 qpair failed and we were unable to recover it.
00:27:36.345 [2024-07-25 19:18:28.697039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.345 [2024-07-25 19:18:28.697191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.345 [2024-07-25 19:18:28.697219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.345 [2024-07-25 19:18:28.697233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.345 [2024-07-25 19:18:28.697247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.345 [2024-07-25 19:18:28.697276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.345 qpair failed and we were unable to recover it.
00:27:36.345 [2024-07-25 19:18:28.707120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.345 [2024-07-25 19:18:28.707269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.345 [2024-07-25 19:18:28.707295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.345 [2024-07-25 19:18:28.707310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.345 [2024-07-25 19:18:28.707323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.345 [2024-07-25 19:18:28.707352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.345 qpair failed and we were unable to recover it.
00:27:36.345 [2024-07-25 19:18:28.717090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.345 [2024-07-25 19:18:28.717246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.345 [2024-07-25 19:18:28.717271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.345 [2024-07-25 19:18:28.717286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.345 [2024-07-25 19:18:28.717300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.345 [2024-07-25 19:18:28.717328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.345 qpair failed and we were unable to recover it.
00:27:36.345 [2024-07-25 19:18:28.727133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.345 [2024-07-25 19:18:28.727295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.345 [2024-07-25 19:18:28.727321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.345 [2024-07-25 19:18:28.727335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.345 [2024-07-25 19:18:28.727349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.345 [2024-07-25 19:18:28.727378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.345 qpair failed and we were unable to recover it.
00:27:36.345 [2024-07-25 19:18:28.737332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.345 [2024-07-25 19:18:28.737524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.345 [2024-07-25 19:18:28.737550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.345 [2024-07-25 19:18:28.737571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.346 [2024-07-25 19:18:28.737585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.346 [2024-07-25 19:18:28.737614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.346 qpair failed and we were unable to recover it.
00:27:36.346 [2024-07-25 19:18:28.747305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.346 [2024-07-25 19:18:28.747448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.346 [2024-07-25 19:18:28.747474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.346 [2024-07-25 19:18:28.747489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.346 [2024-07-25 19:18:28.747503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.346 [2024-07-25 19:18:28.747532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.346 qpair failed and we were unable to recover it.
00:27:36.346 [2024-07-25 19:18:28.757242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.346 [2024-07-25 19:18:28.757388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.346 [2024-07-25 19:18:28.757414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.346 [2024-07-25 19:18:28.757429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.346 [2024-07-25 19:18:28.757443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.346 [2024-07-25 19:18:28.757471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.346 qpair failed and we were unable to recover it.
00:27:36.346 [2024-07-25 19:18:28.767263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.346 [2024-07-25 19:18:28.767414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.346 [2024-07-25 19:18:28.767440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.346 [2024-07-25 19:18:28.767455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.346 [2024-07-25 19:18:28.767468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.346 [2024-07-25 19:18:28.767497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.346 qpair failed and we were unable to recover it.
00:27:36.346 [2024-07-25 19:18:28.777279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:36.346 [2024-07-25 19:18:28.777427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:36.346 [2024-07-25 19:18:28.777453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:36.346 [2024-07-25 19:18:28.777468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:36.346 [2024-07-25 19:18:28.777481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:36.346 [2024-07-25 19:18:28.777509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:36.346 qpair failed and we were unable to recover it.
00:27:36.605 [2024-07-25 19:18:28.787294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.605 [2024-07-25 19:18:28.787435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.605 [2024-07-25 19:18:28.787464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.605 [2024-07-25 19:18:28.787480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.605 [2024-07-25 19:18:28.787493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.605 [2024-07-25 19:18:28.787523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.605 qpair failed and we were unable to recover it. 
00:27:36.605 [2024-07-25 19:18:28.797334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.605 [2024-07-25 19:18:28.797478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.605 [2024-07-25 19:18:28.797505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.605 [2024-07-25 19:18:28.797520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.605 [2024-07-25 19:18:28.797533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.605 [2024-07-25 19:18:28.797562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.605 qpair failed and we were unable to recover it. 
00:27:36.605 [2024-07-25 19:18:28.807329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.605 [2024-07-25 19:18:28.807473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.605 [2024-07-25 19:18:28.807500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.605 [2024-07-25 19:18:28.807515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.605 [2024-07-25 19:18:28.807528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.605 [2024-07-25 19:18:28.807556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.605 qpair failed and we were unable to recover it. 
00:27:36.605 [2024-07-25 19:18:28.817390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.605 [2024-07-25 19:18:28.817533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.605 [2024-07-25 19:18:28.817560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.605 [2024-07-25 19:18:28.817579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.605 [2024-07-25 19:18:28.817592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.605 [2024-07-25 19:18:28.817621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.605 qpair failed and we were unable to recover it. 
00:27:36.606 [2024-07-25 19:18:28.827416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.606 [2024-07-25 19:18:28.827559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.606 [2024-07-25 19:18:28.827592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.606 [2024-07-25 19:18:28.827609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.606 [2024-07-25 19:18:28.827623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.606 [2024-07-25 19:18:28.827652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.606 qpair failed and we were unable to recover it. 
00:27:36.606 [2024-07-25 19:18:28.837434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.606 [2024-07-25 19:18:28.837582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.606 [2024-07-25 19:18:28.837609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.606 [2024-07-25 19:18:28.837624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.606 [2024-07-25 19:18:28.837637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.606 [2024-07-25 19:18:28.837666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.606 qpair failed and we were unable to recover it. 
00:27:36.606 [2024-07-25 19:18:28.847445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.606 [2024-07-25 19:18:28.847593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.606 [2024-07-25 19:18:28.847619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.606 [2024-07-25 19:18:28.847635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.606 [2024-07-25 19:18:28.847648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.606 [2024-07-25 19:18:28.847676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.606 qpair failed and we were unable to recover it. 
00:27:36.606 [2024-07-25 19:18:28.857493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.606 [2024-07-25 19:18:28.857644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.606 [2024-07-25 19:18:28.857670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.606 [2024-07-25 19:18:28.857685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.606 [2024-07-25 19:18:28.857699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.606 [2024-07-25 19:18:28.857727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.606 qpair failed and we were unable to recover it. 
00:27:36.606 [2024-07-25 19:18:28.867554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.606 [2024-07-25 19:18:28.867698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.606 [2024-07-25 19:18:28.867723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.606 [2024-07-25 19:18:28.867738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.606 [2024-07-25 19:18:28.867751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.606 [2024-07-25 19:18:28.867779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.606 qpair failed and we were unable to recover it. 
00:27:36.606 [2024-07-25 19:18:28.877565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.606 [2024-07-25 19:18:28.877707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.606 [2024-07-25 19:18:28.877733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.606 [2024-07-25 19:18:28.877747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.606 [2024-07-25 19:18:28.877760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.606 [2024-07-25 19:18:28.877789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.606 qpair failed and we were unable to recover it. 
00:27:36.606 [2024-07-25 19:18:28.887578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.606 [2024-07-25 19:18:28.887735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.606 [2024-07-25 19:18:28.887762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.606 [2024-07-25 19:18:28.887777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.606 [2024-07-25 19:18:28.887790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.606 [2024-07-25 19:18:28.887818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.606 qpair failed and we were unable to recover it. 
00:27:36.606 [2024-07-25 19:18:28.897592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.606 [2024-07-25 19:18:28.897738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.606 [2024-07-25 19:18:28.897763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.606 [2024-07-25 19:18:28.897778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.606 [2024-07-25 19:18:28.897792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.606 [2024-07-25 19:18:28.897820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.606 qpair failed and we were unable to recover it. 
00:27:36.606 [2024-07-25 19:18:28.907621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.606 [2024-07-25 19:18:28.907762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.606 [2024-07-25 19:18:28.907788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.606 [2024-07-25 19:18:28.907802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.606 [2024-07-25 19:18:28.907816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.606 [2024-07-25 19:18:28.907844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.606 qpair failed and we were unable to recover it. 
00:27:36.606 [2024-07-25 19:18:28.917672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.606 [2024-07-25 19:18:28.917843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.606 [2024-07-25 19:18:28.917876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.606 [2024-07-25 19:18:28.917895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.606 [2024-07-25 19:18:28.917910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.606 [2024-07-25 19:18:28.917941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.606 qpair failed and we were unable to recover it. 
00:27:36.606 [2024-07-25 19:18:28.927658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.606 [2024-07-25 19:18:28.927809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.606 [2024-07-25 19:18:28.927836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.606 [2024-07-25 19:18:28.927851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.606 [2024-07-25 19:18:28.927865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.606 [2024-07-25 19:18:28.927893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.606 qpair failed and we were unable to recover it. 
00:27:36.606 [2024-07-25 19:18:28.937677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.606 [2024-07-25 19:18:28.937825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.606 [2024-07-25 19:18:28.937851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.606 [2024-07-25 19:18:28.937866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.606 [2024-07-25 19:18:28.937880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.606 [2024-07-25 19:18:28.937909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.606 qpair failed and we were unable to recover it. 
00:27:36.606 [2024-07-25 19:18:28.947746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.607 [2024-07-25 19:18:28.947892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.607 [2024-07-25 19:18:28.947919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.607 [2024-07-25 19:18:28.947934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.607 [2024-07-25 19:18:28.947947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.607 [2024-07-25 19:18:28.947975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.607 qpair failed and we were unable to recover it. 
00:27:36.607 [2024-07-25 19:18:28.957767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.607 [2024-07-25 19:18:28.957906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.607 [2024-07-25 19:18:28.957932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.607 [2024-07-25 19:18:28.957948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.607 [2024-07-25 19:18:28.957961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.607 [2024-07-25 19:18:28.957995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.607 qpair failed and we were unable to recover it. 
00:27:36.607 [2024-07-25 19:18:28.967921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.607 [2024-07-25 19:18:28.968084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.607 [2024-07-25 19:18:28.968117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.607 [2024-07-25 19:18:28.968134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.607 [2024-07-25 19:18:28.968147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.607 [2024-07-25 19:18:28.968175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.607 qpair failed and we were unable to recover it. 
00:27:36.607 [2024-07-25 19:18:28.977840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.607 [2024-07-25 19:18:28.978031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.607 [2024-07-25 19:18:28.978057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.607 [2024-07-25 19:18:28.978072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.607 [2024-07-25 19:18:28.978085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.607 [2024-07-25 19:18:28.978120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.607 qpair failed and we were unable to recover it. 
00:27:36.607 [2024-07-25 19:18:28.987862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.607 [2024-07-25 19:18:28.988026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.607 [2024-07-25 19:18:28.988052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.607 [2024-07-25 19:18:28.988067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.607 [2024-07-25 19:18:28.988080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.607 [2024-07-25 19:18:28.988115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.607 qpair failed and we were unable to recover it. 
00:27:36.607 [2024-07-25 19:18:28.997905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.607 [2024-07-25 19:18:28.998074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.607 [2024-07-25 19:18:28.998099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.607 [2024-07-25 19:18:28.998127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.607 [2024-07-25 19:18:28.998140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.607 [2024-07-25 19:18:28.998169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.607 qpair failed and we were unable to recover it. 
00:27:36.607 [2024-07-25 19:18:29.007908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.607 [2024-07-25 19:18:29.008054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.607 [2024-07-25 19:18:29.008084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.607 [2024-07-25 19:18:29.008100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.607 [2024-07-25 19:18:29.008126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.607 [2024-07-25 19:18:29.008156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.607 qpair failed and we were unable to recover it. 
00:27:36.607 [2024-07-25 19:18:29.017981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.607 [2024-07-25 19:18:29.018141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.607 [2024-07-25 19:18:29.018167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.607 [2024-07-25 19:18:29.018182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.607 [2024-07-25 19:18:29.018195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.607 [2024-07-25 19:18:29.018224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.607 qpair failed and we were unable to recover it. 
00:27:36.607 [2024-07-25 19:18:29.027956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.607 [2024-07-25 19:18:29.028097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.607 [2024-07-25 19:18:29.028130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.607 [2024-07-25 19:18:29.028145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.607 [2024-07-25 19:18:29.028159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.607 [2024-07-25 19:18:29.028187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.607 qpair failed and we were unable to recover it. 
00:27:36.607 [2024-07-25 19:18:29.037995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.607 [2024-07-25 19:18:29.038144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.607 [2024-07-25 19:18:29.038170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.607 [2024-07-25 19:18:29.038185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.607 [2024-07-25 19:18:29.038199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.607 [2024-07-25 19:18:29.038227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.607 qpair failed and we were unable to recover it. 
00:27:36.607 [2024-07-25 19:18:29.048043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.607 [2024-07-25 19:18:29.048195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.607 [2024-07-25 19:18:29.048221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.607 [2024-07-25 19:18:29.048237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.607 [2024-07-25 19:18:29.048250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.607 [2024-07-25 19:18:29.048284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.607 qpair failed and we were unable to recover it. 
00:27:36.607 [2024-07-25 19:18:29.058128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.607 [2024-07-25 19:18:29.058269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.607 [2024-07-25 19:18:29.058294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.607 [2024-07-25 19:18:29.058309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.607 [2024-07-25 19:18:29.058323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.607 [2024-07-25 19:18:29.058351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.607 qpair failed and we were unable to recover it. 
00:27:36.607 [2024-07-25 19:18:29.068090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.607 [2024-07-25 19:18:29.068276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.607 [2024-07-25 19:18:29.068302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.607 [2024-07-25 19:18:29.068317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.608 [2024-07-25 19:18:29.068330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.608 [2024-07-25 19:18:29.068359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.608 qpair failed and we were unable to recover it. 
00:27:36.867 [2024-07-25 19:18:29.078093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.867 [2024-07-25 19:18:29.078242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.867 [2024-07-25 19:18:29.078271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.867 [2024-07-25 19:18:29.078286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.867 [2024-07-25 19:18:29.078300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.867 [2024-07-25 19:18:29.078329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.867 qpair failed and we were unable to recover it. 
00:27:36.867 [2024-07-25 19:18:29.088159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.867 [2024-07-25 19:18:29.088317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.867 [2024-07-25 19:18:29.088345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.867 [2024-07-25 19:18:29.088361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.867 [2024-07-25 19:18:29.088375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.867 [2024-07-25 19:18:29.088405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.867 qpair failed and we were unable to recover it. 
00:27:36.867 [2024-07-25 19:18:29.098144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.867 [2024-07-25 19:18:29.098289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.867 [2024-07-25 19:18:29.098321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.867 [2024-07-25 19:18:29.098337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.867 [2024-07-25 19:18:29.098351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.867 [2024-07-25 19:18:29.098380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.867 qpair failed and we were unable to recover it. 
00:27:36.867 [2024-07-25 19:18:29.108186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.867 [2024-07-25 19:18:29.108331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.867 [2024-07-25 19:18:29.108357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.867 [2024-07-25 19:18:29.108372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.867 [2024-07-25 19:18:29.108386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.867 [2024-07-25 19:18:29.108414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.867 qpair failed and we were unable to recover it. 
00:27:36.867 [2024-07-25 19:18:29.118251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.867 [2024-07-25 19:18:29.118396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.867 [2024-07-25 19:18:29.118422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.867 [2024-07-25 19:18:29.118437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.867 [2024-07-25 19:18:29.118450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.867 [2024-07-25 19:18:29.118478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.867 qpair failed and we were unable to recover it. 
00:27:36.867 [2024-07-25 19:18:29.128267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.867 [2024-07-25 19:18:29.128417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.867 [2024-07-25 19:18:29.128444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.867 [2024-07-25 19:18:29.128458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.867 [2024-07-25 19:18:29.128471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.867 [2024-07-25 19:18:29.128500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.867 qpair failed and we were unable to recover it. 
00:27:36.867 [2024-07-25 19:18:29.138273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.867 [2024-07-25 19:18:29.138423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.867 [2024-07-25 19:18:29.138449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.867 [2024-07-25 19:18:29.138464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.867 [2024-07-25 19:18:29.138483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.867 [2024-07-25 19:18:29.138513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.867 qpair failed and we were unable to recover it. 
00:27:36.867 [2024-07-25 19:18:29.148343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.867 [2024-07-25 19:18:29.148493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.867 [2024-07-25 19:18:29.148522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.867 [2024-07-25 19:18:29.148541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.867 [2024-07-25 19:18:29.148555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.867 [2024-07-25 19:18:29.148585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.867 qpair failed and we were unable to recover it. 
00:27:36.867 [2024-07-25 19:18:29.158332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.867 [2024-07-25 19:18:29.158495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.867 [2024-07-25 19:18:29.158522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.867 [2024-07-25 19:18:29.158537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.867 [2024-07-25 19:18:29.158551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.867 [2024-07-25 19:18:29.158579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.867 qpair failed and we were unable to recover it. 
00:27:36.867 [2024-07-25 19:18:29.168415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.867 [2024-07-25 19:18:29.168588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.867 [2024-07-25 19:18:29.168614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.867 [2024-07-25 19:18:29.168629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.867 [2024-07-25 19:18:29.168642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.867 [2024-07-25 19:18:29.168671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.867 qpair failed and we were unable to recover it. 
00:27:36.867 [2024-07-25 19:18:29.178393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.867 [2024-07-25 19:18:29.178545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.867 [2024-07-25 19:18:29.178571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.867 [2024-07-25 19:18:29.178586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.867 [2024-07-25 19:18:29.178600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.867 [2024-07-25 19:18:29.178628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.867 qpair failed and we were unable to recover it. 
00:27:36.867 [2024-07-25 19:18:29.188399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.868 [2024-07-25 19:18:29.188549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.868 [2024-07-25 19:18:29.188575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.868 [2024-07-25 19:18:29.188590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.868 [2024-07-25 19:18:29.188603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.868 [2024-07-25 19:18:29.188632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.868 qpair failed and we were unable to recover it. 
00:27:36.868 [2024-07-25 19:18:29.198431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.868 [2024-07-25 19:18:29.198589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.868 [2024-07-25 19:18:29.198615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.868 [2024-07-25 19:18:29.198629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.868 [2024-07-25 19:18:29.198643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.868 [2024-07-25 19:18:29.198672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.868 qpair failed and we were unable to recover it. 
00:27:36.868 [2024-07-25 19:18:29.208467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.868 [2024-07-25 19:18:29.208632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.868 [2024-07-25 19:18:29.208658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.868 [2024-07-25 19:18:29.208677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.868 [2024-07-25 19:18:29.208691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.868 [2024-07-25 19:18:29.208720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.868 qpair failed and we were unable to recover it. 
00:27:36.868 [2024-07-25 19:18:29.218498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.868 [2024-07-25 19:18:29.218647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.868 [2024-07-25 19:18:29.218673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.868 [2024-07-25 19:18:29.218691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.868 [2024-07-25 19:18:29.218705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.868 [2024-07-25 19:18:29.218734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.868 qpair failed and we were unable to recover it. 
00:27:36.868 [2024-07-25 19:18:29.228503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.868 [2024-07-25 19:18:29.228649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.868 [2024-07-25 19:18:29.228675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.868 [2024-07-25 19:18:29.228690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.868 [2024-07-25 19:18:29.228709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.868 [2024-07-25 19:18:29.228738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.868 qpair failed and we were unable to recover it. 
00:27:36.868 [2024-07-25 19:18:29.238569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.868 [2024-07-25 19:18:29.238760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.868 [2024-07-25 19:18:29.238787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.868 [2024-07-25 19:18:29.238802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.868 [2024-07-25 19:18:29.238816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.868 [2024-07-25 19:18:29.238844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.868 qpair failed and we were unable to recover it. 
00:27:36.868 [2024-07-25 19:18:29.248670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.868 [2024-07-25 19:18:29.248820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.868 [2024-07-25 19:18:29.248846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.868 [2024-07-25 19:18:29.248861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.868 [2024-07-25 19:18:29.248874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.868 [2024-07-25 19:18:29.248905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.868 qpair failed and we were unable to recover it. 
00:27:36.868 [2024-07-25 19:18:29.258600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.868 [2024-07-25 19:18:29.258749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.868 [2024-07-25 19:18:29.258774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.868 [2024-07-25 19:18:29.258789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.868 [2024-07-25 19:18:29.258802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.868 [2024-07-25 19:18:29.258831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.868 qpair failed and we were unable to recover it. 
00:27:36.868 [2024-07-25 19:18:29.268621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.868 [2024-07-25 19:18:29.268765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.868 [2024-07-25 19:18:29.268790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.868 [2024-07-25 19:18:29.268806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.868 [2024-07-25 19:18:29.268819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.868 [2024-07-25 19:18:29.268847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.868 qpair failed and we were unable to recover it. 
00:27:36.868 [2024-07-25 19:18:29.278641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.868 [2024-07-25 19:18:29.278789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.868 [2024-07-25 19:18:29.278816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.868 [2024-07-25 19:18:29.278830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.868 [2024-07-25 19:18:29.278844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.868 [2024-07-25 19:18:29.278873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.868 qpair failed and we were unable to recover it. 
00:27:36.868 [2024-07-25 19:18:29.288673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.868 [2024-07-25 19:18:29.288825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.868 [2024-07-25 19:18:29.288851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.868 [2024-07-25 19:18:29.288866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.868 [2024-07-25 19:18:29.288878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.868 [2024-07-25 19:18:29.288907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.868 qpair failed and we were unable to recover it. 
00:27:36.868 [2024-07-25 19:18:29.298708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.868 [2024-07-25 19:18:29.298852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.868 [2024-07-25 19:18:29.298879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.868 [2024-07-25 19:18:29.298894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.868 [2024-07-25 19:18:29.298908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.868 [2024-07-25 19:18:29.298936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.868 qpair failed and we were unable to recover it. 
00:27:36.868 [2024-07-25 19:18:29.308764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.868 [2024-07-25 19:18:29.308912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.868 [2024-07-25 19:18:29.308938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.868 [2024-07-25 19:18:29.308953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.868 [2024-07-25 19:18:29.308968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.868 [2024-07-25 19:18:29.308996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.868 qpair failed and we were unable to recover it. 
00:27:36.868 [2024-07-25 19:18:29.318750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.868 [2024-07-25 19:18:29.318896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.869 [2024-07-25 19:18:29.318922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.869 [2024-07-25 19:18:29.318937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.869 [2024-07-25 19:18:29.318958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.869 [2024-07-25 19:18:29.318987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.869 qpair failed and we were unable to recover it. 
00:27:36.869 [2024-07-25 19:18:29.328903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.869 [2024-07-25 19:18:29.329067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.869 [2024-07-25 19:18:29.329093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.869 [2024-07-25 19:18:29.329117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.869 [2024-07-25 19:18:29.329131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:36.869 [2024-07-25 19:18:29.329160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.869 qpair failed and we were unable to recover it. 
00:27:37.128 [2024-07-25 19:18:29.338871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.128 [2024-07-25 19:18:29.339051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.128 [2024-07-25 19:18:29.339079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.128 [2024-07-25 19:18:29.339095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.128 [2024-07-25 19:18:29.339117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.128 [2024-07-25 19:18:29.339148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.128 qpair failed and we were unable to recover it. 
00:27:37.128 [2024-07-25 19:18:29.348961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.128 [2024-07-25 19:18:29.349114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.128 [2024-07-25 19:18:29.349142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.128 [2024-07-25 19:18:29.349157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.128 [2024-07-25 19:18:29.349171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.128 [2024-07-25 19:18:29.349201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.128 qpair failed and we were unable to recover it. 
00:27:37.128 [2024-07-25 19:18:29.358904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.128 [2024-07-25 19:18:29.359048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.128 [2024-07-25 19:18:29.359074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.128 [2024-07-25 19:18:29.359089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.128 [2024-07-25 19:18:29.359108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.128 [2024-07-25 19:18:29.359140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.128 qpair failed and we were unable to recover it. 
00:27:37.128 [2024-07-25 19:18:29.368937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.128 [2024-07-25 19:18:29.369084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.128 [2024-07-25 19:18:29.369117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.128 [2024-07-25 19:18:29.369133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.128 [2024-07-25 19:18:29.369146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.128 [2024-07-25 19:18:29.369175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.128 qpair failed and we were unable to recover it. 
00:27:37.128 [2024-07-25 19:18:29.378927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.128 [2024-07-25 19:18:29.379075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.128 [2024-07-25 19:18:29.379107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.128 [2024-07-25 19:18:29.379125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.128 [2024-07-25 19:18:29.379138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.128 [2024-07-25 19:18:29.379168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.128 qpair failed and we were unable to recover it. 
00:27:37.128 [2024-07-25 19:18:29.388943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.128 [2024-07-25 19:18:29.389089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.128 [2024-07-25 19:18:29.389123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.128 [2024-07-25 19:18:29.389139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.128 [2024-07-25 19:18:29.389153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.128 [2024-07-25 19:18:29.389181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.128 qpair failed and we were unable to recover it. 
00:27:37.128 [2024-07-25 19:18:29.398984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.128 [2024-07-25 19:18:29.399134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.128 [2024-07-25 19:18:29.399160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.128 [2024-07-25 19:18:29.399176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.128 [2024-07-25 19:18:29.399189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.128 [2024-07-25 19:18:29.399217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.128 qpair failed and we were unable to recover it. 
00:27:37.128 [2024-07-25 19:18:29.409021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.128 [2024-07-25 19:18:29.409177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.128 [2024-07-25 19:18:29.409203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.128 [2024-07-25 19:18:29.409224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.128 [2024-07-25 19:18:29.409238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.128 [2024-07-25 19:18:29.409267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.128 qpair failed and we were unable to recover it. 
00:27:37.128 [2024-07-25 19:18:29.419038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.128 [2024-07-25 19:18:29.419190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.128 [2024-07-25 19:18:29.419216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.128 [2024-07-25 19:18:29.419231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.128 [2024-07-25 19:18:29.419245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.128 [2024-07-25 19:18:29.419275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.128 qpair failed and we were unable to recover it. 
00:27:37.128 [2024-07-25 19:18:29.429114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.128 [2024-07-25 19:18:29.429314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.128 [2024-07-25 19:18:29.429346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.128 [2024-07-25 19:18:29.429369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.128 [2024-07-25 19:18:29.429387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.128 [2024-07-25 19:18:29.429423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.128 qpair failed and we were unable to recover it. 
00:27:37.128 [2024-07-25 19:18:29.439090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.128 [2024-07-25 19:18:29.439244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.128 [2024-07-25 19:18:29.439271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.128 [2024-07-25 19:18:29.439286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.128 [2024-07-25 19:18:29.439299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.128 [2024-07-25 19:18:29.439328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.128 qpair failed and we were unable to recover it. 
00:27:37.128 [2024-07-25 19:18:29.449161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.128 [2024-07-25 19:18:29.449306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.128 [2024-07-25 19:18:29.449332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.128 [2024-07-25 19:18:29.449348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.129 [2024-07-25 19:18:29.449361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.129 [2024-07-25 19:18:29.449390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.129 qpair failed and we were unable to recover it. 
00:27:37.129 [2024-07-25 19:18:29.459207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.129 [2024-07-25 19:18:29.459391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.129 [2024-07-25 19:18:29.459417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.129 [2024-07-25 19:18:29.459432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.129 [2024-07-25 19:18:29.459445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.129 [2024-07-25 19:18:29.459473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.129 qpair failed and we were unable to recover it. 
00:27:37.129 [2024-07-25 19:18:29.469216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.129 [2024-07-25 19:18:29.469427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.129 [2024-07-25 19:18:29.469453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.129 [2024-07-25 19:18:29.469468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.129 [2024-07-25 19:18:29.469482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.129 [2024-07-25 19:18:29.469509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.129 qpair failed and we were unable to recover it. 
00:27:37.129 [2024-07-25 19:18:29.479246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.129 [2024-07-25 19:18:29.479419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.129 [2024-07-25 19:18:29.479445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.129 [2024-07-25 19:18:29.479460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.129 [2024-07-25 19:18:29.479472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.129 [2024-07-25 19:18:29.479501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.129 qpair failed and we were unable to recover it. 
00:27:37.129 [2024-07-25 19:18:29.489262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.129 [2024-07-25 19:18:29.489460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.129 [2024-07-25 19:18:29.489486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.129 [2024-07-25 19:18:29.489501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.129 [2024-07-25 19:18:29.489514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.129 [2024-07-25 19:18:29.489542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.129 qpair failed and we were unable to recover it. 
00:27:37.129 [2024-07-25 19:18:29.499334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.129 [2024-07-25 19:18:29.499505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.129 [2024-07-25 19:18:29.499530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.129 [2024-07-25 19:18:29.499551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.129 [2024-07-25 19:18:29.499565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.129 [2024-07-25 19:18:29.499594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.129 qpair failed and we were unable to recover it. 
00:27:37.129 [2024-07-25 19:18:29.509302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.129 [2024-07-25 19:18:29.509444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.129 [2024-07-25 19:18:29.509470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.129 [2024-07-25 19:18:29.509486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.129 [2024-07-25 19:18:29.509499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.129 [2024-07-25 19:18:29.509528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.129 qpair failed and we were unable to recover it. 
00:27:37.129 [2024-07-25 19:18:29.519313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.129 [2024-07-25 19:18:29.519456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.129 [2024-07-25 19:18:29.519482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.129 [2024-07-25 19:18:29.519497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.129 [2024-07-25 19:18:29.519510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.129 [2024-07-25 19:18:29.519539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.129 qpair failed and we were unable to recover it. 
00:27:37.129 [2024-07-25 19:18:29.529401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.129 [2024-07-25 19:18:29.529550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.129 [2024-07-25 19:18:29.529575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.129 [2024-07-25 19:18:29.529590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.129 [2024-07-25 19:18:29.529603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.129 [2024-07-25 19:18:29.529632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.129 qpair failed and we were unable to recover it. 
00:27:37.129 [2024-07-25 19:18:29.539416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.129 [2024-07-25 19:18:29.539561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.129 [2024-07-25 19:18:29.539587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.129 [2024-07-25 19:18:29.539602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.129 [2024-07-25 19:18:29.539615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.129 [2024-07-25 19:18:29.539646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.129 qpair failed and we were unable to recover it. 
00:27:37.129 [2024-07-25 19:18:29.549430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.129 [2024-07-25 19:18:29.549574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.129 [2024-07-25 19:18:29.549599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.129 [2024-07-25 19:18:29.549613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.129 [2024-07-25 19:18:29.549625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.129 [2024-07-25 19:18:29.549653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.129 qpair failed and we were unable to recover it. 
00:27:37.129 [2024-07-25 19:18:29.559427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.129 [2024-07-25 19:18:29.559574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.129 [2024-07-25 19:18:29.559600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.129 [2024-07-25 19:18:29.559615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.129 [2024-07-25 19:18:29.559628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.129 [2024-07-25 19:18:29.559657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.129 qpair failed and we were unable to recover it. 
00:27:37.129 [2024-07-25 19:18:29.569466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.129 [2024-07-25 19:18:29.569616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.129 [2024-07-25 19:18:29.569641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.129 [2024-07-25 19:18:29.569655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.129 [2024-07-25 19:18:29.569669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.129 [2024-07-25 19:18:29.569697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.129 qpair failed and we were unable to recover it. 
00:27:37.129 [2024-07-25 19:18:29.579629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.129 [2024-07-25 19:18:29.579785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.129 [2024-07-25 19:18:29.579810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.129 [2024-07-25 19:18:29.579826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.129 [2024-07-25 19:18:29.579839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.129 [2024-07-25 19:18:29.579867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.130 qpair failed and we were unable to recover it. 
00:27:37.130 [2024-07-25 19:18:29.589525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.130 [2024-07-25 19:18:29.589668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.130 [2024-07-25 19:18:29.589694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.130 [2024-07-25 19:18:29.589715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.130 [2024-07-25 19:18:29.589731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.130 [2024-07-25 19:18:29.589760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.130 qpair failed and we were unable to recover it. 
00:27:37.389 [2024-07-25 19:18:29.599567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.389 [2024-07-25 19:18:29.599720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.389 [2024-07-25 19:18:29.599748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.389 [2024-07-25 19:18:29.599764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.389 [2024-07-25 19:18:29.599778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.389 [2024-07-25 19:18:29.599807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.389 qpair failed and we were unable to recover it. 
00:27:37.389 [2024-07-25 19:18:29.609596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.389 [2024-07-25 19:18:29.609745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.389 [2024-07-25 19:18:29.609772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.389 [2024-07-25 19:18:29.609788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.389 [2024-07-25 19:18:29.609800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.389 [2024-07-25 19:18:29.609830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.389 qpair failed and we were unable to recover it. 
00:27:37.389 [2024-07-25 19:18:29.619617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.389 [2024-07-25 19:18:29.619757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.389 [2024-07-25 19:18:29.619784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.389 [2024-07-25 19:18:29.619799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.389 [2024-07-25 19:18:29.619813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.389 [2024-07-25 19:18:29.619841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.389 qpair failed and we were unable to recover it. 
00:27:37.389 [2024-07-25 19:18:29.629669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.389 [2024-07-25 19:18:29.629820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.389 [2024-07-25 19:18:29.629850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.389 [2024-07-25 19:18:29.629866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.389 [2024-07-25 19:18:29.629880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.389 [2024-07-25 19:18:29.629909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.389 qpair failed and we were unable to recover it. 
00:27:37.389 [2024-07-25 19:18:29.639662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.389 [2024-07-25 19:18:29.639799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.389 [2024-07-25 19:18:29.639825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.389 [2024-07-25 19:18:29.639840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.389 [2024-07-25 19:18:29.639854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.389 [2024-07-25 19:18:29.639882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.389 qpair failed and we were unable to recover it. 
00:27:37.389 [2024-07-25 19:18:29.649760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.389 [2024-07-25 19:18:29.649982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.389 [2024-07-25 19:18:29.650010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.389 [2024-07-25 19:18:29.650025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.389 [2024-07-25 19:18:29.650042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.389 [2024-07-25 19:18:29.650072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.389 qpair failed and we were unable to recover it. 
00:27:37.389 [2024-07-25 19:18:29.659806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.389 [2024-07-25 19:18:29.659950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.389 [2024-07-25 19:18:29.659975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.389 [2024-07-25 19:18:29.659990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.390 [2024-07-25 19:18:29.660003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.390 [2024-07-25 19:18:29.660032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.390 qpair failed and we were unable to recover it. 
00:27:37.390 [2024-07-25 19:18:29.669774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.390 [2024-07-25 19:18:29.669914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.390 [2024-07-25 19:18:29.669941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.390 [2024-07-25 19:18:29.669956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.390 [2024-07-25 19:18:29.669969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.390 [2024-07-25 19:18:29.669997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.390 qpair failed and we were unable to recover it. 
00:27:37.390 [2024-07-25 19:18:29.679779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.390 [2024-07-25 19:18:29.679921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.390 [2024-07-25 19:18:29.679952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.390 [2024-07-25 19:18:29.679967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.390 [2024-07-25 19:18:29.679980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.390 [2024-07-25 19:18:29.680009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.390 qpair failed and we were unable to recover it. 
00:27:37.390 [2024-07-25 19:18:29.689827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.390 [2024-07-25 19:18:29.689978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.390 [2024-07-25 19:18:29.690004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.390 [2024-07-25 19:18:29.690018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.390 [2024-07-25 19:18:29.690031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.390 [2024-07-25 19:18:29.690060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.390 qpair failed and we were unable to recover it. 
00:27:37.390 [2024-07-25 19:18:29.699871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.390 [2024-07-25 19:18:29.700021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.390 [2024-07-25 19:18:29.700047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.390 [2024-07-25 19:18:29.700062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.390 [2024-07-25 19:18:29.700076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.390 [2024-07-25 19:18:29.700111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.390 qpair failed and we were unable to recover it. 
00:27:37.390 [2024-07-25 19:18:29.709864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.390 [2024-07-25 19:18:29.710025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.390 [2024-07-25 19:18:29.710051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.390 [2024-07-25 19:18:29.710065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.390 [2024-07-25 19:18:29.710079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.390 [2024-07-25 19:18:29.710118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.390 qpair failed and we were unable to recover it.
00:27:37.390 [2024-07-25 19:18:29.719918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.390 [2024-07-25 19:18:29.720071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.390 [2024-07-25 19:18:29.720097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.390 [2024-07-25 19:18:29.720120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.390 [2024-07-25 19:18:29.720134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.390 [2024-07-25 19:18:29.720169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.390 qpair failed and we were unable to recover it.
00:27:37.390 [2024-07-25 19:18:29.729995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.390 [2024-07-25 19:18:29.730172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.390 [2024-07-25 19:18:29.730198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.390 [2024-07-25 19:18:29.730213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.390 [2024-07-25 19:18:29.730228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.390 [2024-07-25 19:18:29.730258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.390 qpair failed and we were unable to recover it.
00:27:37.390 [2024-07-25 19:18:29.740022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.390 [2024-07-25 19:18:29.740188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.390 [2024-07-25 19:18:29.740214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.390 [2024-07-25 19:18:29.740229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.390 [2024-07-25 19:18:29.740242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.390 [2024-07-25 19:18:29.740270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.390 qpair failed and we were unable to recover it.
00:27:37.390 [2024-07-25 19:18:29.749987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.390 [2024-07-25 19:18:29.750129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.390 [2024-07-25 19:18:29.750155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.390 [2024-07-25 19:18:29.750170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.390 [2024-07-25 19:18:29.750183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.390 [2024-07-25 19:18:29.750212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.390 qpair failed and we were unable to recover it.
00:27:37.390 [2024-07-25 19:18:29.760016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.390 [2024-07-25 19:18:29.760162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.390 [2024-07-25 19:18:29.760187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.390 [2024-07-25 19:18:29.760202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.390 [2024-07-25 19:18:29.760215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.390 [2024-07-25 19:18:29.760244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.390 qpair failed and we were unable to recover it.
00:27:37.390 [2024-07-25 19:18:29.770096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.390 [2024-07-25 19:18:29.770257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.390 [2024-07-25 19:18:29.770289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.390 [2024-07-25 19:18:29.770304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.390 [2024-07-25 19:18:29.770317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.390 [2024-07-25 19:18:29.770346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.390 qpair failed and we were unable to recover it.
00:27:37.390 [2024-07-25 19:18:29.780071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.390 [2024-07-25 19:18:29.780222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.390 [2024-07-25 19:18:29.780247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.390 [2024-07-25 19:18:29.780262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.390 [2024-07-25 19:18:29.780275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.390 [2024-07-25 19:18:29.780306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.390 qpair failed and we were unable to recover it.
00:27:37.390 [2024-07-25 19:18:29.790088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.390 [2024-07-25 19:18:29.790236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.390 [2024-07-25 19:18:29.790262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.390 [2024-07-25 19:18:29.790277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.390 [2024-07-25 19:18:29.790290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.390 [2024-07-25 19:18:29.790319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.391 qpair failed and we were unable to recover it.
00:27:37.391 [2024-07-25 19:18:29.800135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.391 [2024-07-25 19:18:29.800276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.391 [2024-07-25 19:18:29.800302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.391 [2024-07-25 19:18:29.800317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.391 [2024-07-25 19:18:29.800331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.391 [2024-07-25 19:18:29.800359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.391 qpair failed and we were unable to recover it.
00:27:37.391 [2024-07-25 19:18:29.810190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.391 [2024-07-25 19:18:29.810348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.391 [2024-07-25 19:18:29.810374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.391 [2024-07-25 19:18:29.810389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.391 [2024-07-25 19:18:29.810402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.391 [2024-07-25 19:18:29.810437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.391 qpair failed and we were unable to recover it.
00:27:37.391 [2024-07-25 19:18:29.820238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.391 [2024-07-25 19:18:29.820383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.391 [2024-07-25 19:18:29.820409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.391 [2024-07-25 19:18:29.820424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.391 [2024-07-25 19:18:29.820437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.391 [2024-07-25 19:18:29.820466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.391 qpair failed and we were unable to recover it.
00:27:37.391 [2024-07-25 19:18:29.830276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.391 [2024-07-25 19:18:29.830466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.391 [2024-07-25 19:18:29.830492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.391 [2024-07-25 19:18:29.830511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.391 [2024-07-25 19:18:29.830526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.391 [2024-07-25 19:18:29.830554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.391 qpair failed and we were unable to recover it.
00:27:37.391 [2024-07-25 19:18:29.840274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.391 [2024-07-25 19:18:29.840412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.391 [2024-07-25 19:18:29.840438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.391 [2024-07-25 19:18:29.840453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.391 [2024-07-25 19:18:29.840466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.391 [2024-07-25 19:18:29.840495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.391 qpair failed and we were unable to recover it.
00:27:37.391 [2024-07-25 19:18:29.850284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.391 [2024-07-25 19:18:29.850425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.391 [2024-07-25 19:18:29.850451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.391 [2024-07-25 19:18:29.850466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.391 [2024-07-25 19:18:29.850479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.391 [2024-07-25 19:18:29.850507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.391 qpair failed and we were unable to recover it.
00:27:37.650 [2024-07-25 19:18:29.860346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.650 [2024-07-25 19:18:29.860492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.650 [2024-07-25 19:18:29.860525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.650 [2024-07-25 19:18:29.860543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.650 [2024-07-25 19:18:29.860556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.650 [2024-07-25 19:18:29.860587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.650 qpair failed and we were unable to recover it.
00:27:37.650 [2024-07-25 19:18:29.870386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.650 [2024-07-25 19:18:29.870581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.650 [2024-07-25 19:18:29.870609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.650 [2024-07-25 19:18:29.870624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.650 [2024-07-25 19:18:29.870638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.650 [2024-07-25 19:18:29.870667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.650 qpair failed and we were unable to recover it.
00:27:37.650 [2024-07-25 19:18:29.880376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.650 [2024-07-25 19:18:29.880548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.650 [2024-07-25 19:18:29.880575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.650 [2024-07-25 19:18:29.880590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.650 [2024-07-25 19:18:29.880607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.650 [2024-07-25 19:18:29.880639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.650 qpair failed and we were unable to recover it.
00:27:37.650 [2024-07-25 19:18:29.890428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.650 [2024-07-25 19:18:29.890576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.650 [2024-07-25 19:18:29.890602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.650 [2024-07-25 19:18:29.890617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.650 [2024-07-25 19:18:29.890631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.650 [2024-07-25 19:18:29.890659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.650 qpair failed and we were unable to recover it.
00:27:37.650 [2024-07-25 19:18:29.900447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.650 [2024-07-25 19:18:29.900613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.650 [2024-07-25 19:18:29.900638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.650 [2024-07-25 19:18:29.900653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.650 [2024-07-25 19:18:29.900667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.650 [2024-07-25 19:18:29.900701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.650 qpair failed and we were unable to recover it.
00:27:37.650 [2024-07-25 19:18:29.910478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.650 [2024-07-25 19:18:29.910644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.650 [2024-07-25 19:18:29.910670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.650 [2024-07-25 19:18:29.910685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.650 [2024-07-25 19:18:29.910701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.650 [2024-07-25 19:18:29.910731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.650 qpair failed and we were unable to recover it.
00:27:37.650 [2024-07-25 19:18:29.920476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.650 [2024-07-25 19:18:29.920641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.650 [2024-07-25 19:18:29.920668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.650 [2024-07-25 19:18:29.920683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.650 [2024-07-25 19:18:29.920697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.650 [2024-07-25 19:18:29.920726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.650 qpair failed and we were unable to recover it.
00:27:37.650 [2024-07-25 19:18:29.930538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.651 [2024-07-25 19:18:29.930740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.651 [2024-07-25 19:18:29.930767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.651 [2024-07-25 19:18:29.930783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.651 [2024-07-25 19:18:29.930800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.651 [2024-07-25 19:18:29.930831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.651 qpair failed and we were unable to recover it.
00:27:37.651 [2024-07-25 19:18:29.940520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.651 [2024-07-25 19:18:29.940663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.651 [2024-07-25 19:18:29.940689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.651 [2024-07-25 19:18:29.940705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.651 [2024-07-25 19:18:29.940718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.651 [2024-07-25 19:18:29.940746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.651 qpair failed and we were unable to recover it.
00:27:37.651 [2024-07-25 19:18:29.950596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.651 [2024-07-25 19:18:29.950740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.651 [2024-07-25 19:18:29.950771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.651 [2024-07-25 19:18:29.950787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.651 [2024-07-25 19:18:29.950800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.651 [2024-07-25 19:18:29.950828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.651 qpair failed and we were unable to recover it.
00:27:37.651 [2024-07-25 19:18:29.960622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.651 [2024-07-25 19:18:29.960769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.651 [2024-07-25 19:18:29.960795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.651 [2024-07-25 19:18:29.960810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.651 [2024-07-25 19:18:29.960824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.651 [2024-07-25 19:18:29.960852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.651 qpair failed and we were unable to recover it.
00:27:37.651 [2024-07-25 19:18:29.970627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.651 [2024-07-25 19:18:29.970775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.651 [2024-07-25 19:18:29.970801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.651 [2024-07-25 19:18:29.970815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.651 [2024-07-25 19:18:29.970829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.651 [2024-07-25 19:18:29.970857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.651 qpair failed and we were unable to recover it.
00:27:37.651 [2024-07-25 19:18:29.980679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.651 [2024-07-25 19:18:29.980853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.651 [2024-07-25 19:18:29.980879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.651 [2024-07-25 19:18:29.980894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.651 [2024-07-25 19:18:29.980907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.651 [2024-07-25 19:18:29.980935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.651 qpair failed and we were unable to recover it.
00:27:37.651 [2024-07-25 19:18:29.990672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.651 [2024-07-25 19:18:29.990814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.651 [2024-07-25 19:18:29.990840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.651 [2024-07-25 19:18:29.990857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.651 [2024-07-25 19:18:29.990879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.651 [2024-07-25 19:18:29.990908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.651 qpair failed and we were unable to recover it.
00:27:37.651 [2024-07-25 19:18:30.000743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.651 [2024-07-25 19:18:30.000908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.651 [2024-07-25 19:18:30.000935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.651 [2024-07-25 19:18:30.000950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.651 [2024-07-25 19:18:30.000962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.651 [2024-07-25 19:18:30.000991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.651 qpair failed and we were unable to recover it.
00:27:37.651 [2024-07-25 19:18:30.010780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.651 [2024-07-25 19:18:30.010932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.651 [2024-07-25 19:18:30.010960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.651 [2024-07-25 19:18:30.010976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.651 [2024-07-25 19:18:30.010988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.651 [2024-07-25 19:18:30.011019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.651 qpair failed and we were unable to recover it.
00:27:37.651 [2024-07-25 19:18:30.020928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.651 [2024-07-25 19:18:30.021088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.651 [2024-07-25 19:18:30.021134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.651 [2024-07-25 19:18:30.021152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.651 [2024-07-25 19:18:30.021166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.651 [2024-07-25 19:18:30.021199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.651 qpair failed and we were unable to recover it.
00:27:37.651 [2024-07-25 19:18:30.030840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.651 [2024-07-25 19:18:30.030993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.651 [2024-07-25 19:18:30.031021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.651 [2024-07-25 19:18:30.031037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.651 [2024-07-25 19:18:30.031050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.651 [2024-07-25 19:18:30.031080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.651 qpair failed and we were unable to recover it.
00:27:37.651 [2024-07-25 19:18:30.040873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.651 [2024-07-25 19:18:30.041024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.651 [2024-07-25 19:18:30.041051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.651 [2024-07-25 19:18:30.041066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.651 [2024-07-25 19:18:30.041080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.651 [2024-07-25 19:18:30.041117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.651 qpair failed and we were unable to recover it.
00:27:37.651 [2024-07-25 19:18:30.050894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.651 [2024-07-25 19:18:30.051083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.651 [2024-07-25 19:18:30.051118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.651 [2024-07-25 19:18:30.051135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.651 [2024-07-25 19:18:30.051148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.651 [2024-07-25 19:18:30.051178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.651 qpair failed and we were unable to recover it.
00:27:37.651 [2024-07-25 19:18:30.060888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.651 [2024-07-25 19:18:30.061034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.652 [2024-07-25 19:18:30.061060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.652 [2024-07-25 19:18:30.061076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.652 [2024-07-25 19:18:30.061089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.652 [2024-07-25 19:18:30.061126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.652 qpair failed and we were unable to recover it.
00:27:37.652 [2024-07-25 19:18:30.070966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.652 [2024-07-25 19:18:30.071130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.652 [2024-07-25 19:18:30.071156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.652 [2024-07-25 19:18:30.071171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.652 [2024-07-25 19:18:30.071185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.652 [2024-07-25 19:18:30.071214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.652 qpair failed and we were unable to recover it. 
00:27:37.652 [2024-07-25 19:18:30.080950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.652 [2024-07-25 19:18:30.081088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.652 [2024-07-25 19:18:30.081121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.652 [2024-07-25 19:18:30.081137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.652 [2024-07-25 19:18:30.081157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.652 [2024-07-25 19:18:30.081188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.652 qpair failed and we were unable to recover it. 
00:27:37.652 [2024-07-25 19:18:30.090997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.652 [2024-07-25 19:18:30.091173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.652 [2024-07-25 19:18:30.091199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.652 [2024-07-25 19:18:30.091214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.652 [2024-07-25 19:18:30.091228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.652 [2024-07-25 19:18:30.091256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.652 qpair failed and we were unable to recover it. 
00:27:37.652 [2024-07-25 19:18:30.101018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.652 [2024-07-25 19:18:30.101179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.652 [2024-07-25 19:18:30.101205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.652 [2024-07-25 19:18:30.101220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.652 [2024-07-25 19:18:30.101234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.652 [2024-07-25 19:18:30.101263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.652 qpair failed and we were unable to recover it. 
00:27:37.652 [2024-07-25 19:18:30.111031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.652 [2024-07-25 19:18:30.111186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.652 [2024-07-25 19:18:30.111213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.652 [2024-07-25 19:18:30.111228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.652 [2024-07-25 19:18:30.111241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.652 [2024-07-25 19:18:30.111270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.652 qpair failed and we were unable to recover it. 
00:27:37.912 [2024-07-25 19:18:30.121171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.912 [2024-07-25 19:18:30.121332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.912 [2024-07-25 19:18:30.121360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.912 [2024-07-25 19:18:30.121376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.912 [2024-07-25 19:18:30.121389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.912 [2024-07-25 19:18:30.121419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.912 qpair failed and we were unable to recover it. 
00:27:37.912 [2024-07-25 19:18:30.131091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.912 [2024-07-25 19:18:30.131281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.912 [2024-07-25 19:18:30.131309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.912 [2024-07-25 19:18:30.131325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.912 [2024-07-25 19:18:30.131339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.912 [2024-07-25 19:18:30.131368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.912 qpair failed and we were unable to recover it. 
00:27:37.912 [2024-07-25 19:18:30.141131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.912 [2024-07-25 19:18:30.141276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.912 [2024-07-25 19:18:30.141303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.912 [2024-07-25 19:18:30.141318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.912 [2024-07-25 19:18:30.141332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.912 [2024-07-25 19:18:30.141361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.912 qpair failed and we were unable to recover it. 
00:27:37.912 [2024-07-25 19:18:30.151136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.912 [2024-07-25 19:18:30.151282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.912 [2024-07-25 19:18:30.151308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.912 [2024-07-25 19:18:30.151323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.912 [2024-07-25 19:18:30.151336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.912 [2024-07-25 19:18:30.151365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.912 qpair failed and we were unable to recover it. 
00:27:37.912 [2024-07-25 19:18:30.161188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.912 [2024-07-25 19:18:30.161338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.912 [2024-07-25 19:18:30.161365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.912 [2024-07-25 19:18:30.161380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.912 [2024-07-25 19:18:30.161394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.912 [2024-07-25 19:18:30.161424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.912 qpair failed and we were unable to recover it. 
00:27:37.912 [2024-07-25 19:18:30.171207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.912 [2024-07-25 19:18:30.171395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.912 [2024-07-25 19:18:30.171421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.912 [2024-07-25 19:18:30.171442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.912 [2024-07-25 19:18:30.171457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.912 [2024-07-25 19:18:30.171487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.912 qpair failed and we were unable to recover it. 
00:27:37.912 [2024-07-25 19:18:30.181237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.912 [2024-07-25 19:18:30.181391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.912 [2024-07-25 19:18:30.181418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.912 [2024-07-25 19:18:30.181432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.912 [2024-07-25 19:18:30.181446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.912 [2024-07-25 19:18:30.181474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.912 qpair failed and we were unable to recover it. 
00:27:37.912 [2024-07-25 19:18:30.191294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.912 [2024-07-25 19:18:30.191437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.912 [2024-07-25 19:18:30.191463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.912 [2024-07-25 19:18:30.191477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.912 [2024-07-25 19:18:30.191490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.912 [2024-07-25 19:18:30.191519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.912 qpair failed and we were unable to recover it. 
00:27:37.912 [2024-07-25 19:18:30.201310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.912 [2024-07-25 19:18:30.201457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.912 [2024-07-25 19:18:30.201483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.912 [2024-07-25 19:18:30.201498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.912 [2024-07-25 19:18:30.201511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.912 [2024-07-25 19:18:30.201540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.912 qpair failed and we were unable to recover it. 
00:27:37.912 [2024-07-25 19:18:30.211343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.912 [2024-07-25 19:18:30.211486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.912 [2024-07-25 19:18:30.211512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.912 [2024-07-25 19:18:30.211527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.912 [2024-07-25 19:18:30.211541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.913 [2024-07-25 19:18:30.211569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.913 qpair failed and we were unable to recover it. 
00:27:37.913 [2024-07-25 19:18:30.221384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.913 [2024-07-25 19:18:30.221568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.913 [2024-07-25 19:18:30.221594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.913 [2024-07-25 19:18:30.221609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.913 [2024-07-25 19:18:30.221623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.913 [2024-07-25 19:18:30.221651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.913 qpair failed and we were unable to recover it. 
00:27:37.913 [2024-07-25 19:18:30.231419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.913 [2024-07-25 19:18:30.231595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.913 [2024-07-25 19:18:30.231622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.913 [2024-07-25 19:18:30.231638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.913 [2024-07-25 19:18:30.231651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.913 [2024-07-25 19:18:30.231680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.913 qpair failed and we were unable to recover it. 
00:27:37.913 [2024-07-25 19:18:30.241445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.913 [2024-07-25 19:18:30.241628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.913 [2024-07-25 19:18:30.241654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.913 [2024-07-25 19:18:30.241669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.913 [2024-07-25 19:18:30.241682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.913 [2024-07-25 19:18:30.241711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.913 qpair failed and we were unable to recover it. 
00:27:37.913 [2024-07-25 19:18:30.251432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.913 [2024-07-25 19:18:30.251592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.913 [2024-07-25 19:18:30.251618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.913 [2024-07-25 19:18:30.251634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.913 [2024-07-25 19:18:30.251647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.913 [2024-07-25 19:18:30.251675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.913 qpair failed and we were unable to recover it. 
00:27:37.913 [2024-07-25 19:18:30.261453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.913 [2024-07-25 19:18:30.261603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.913 [2024-07-25 19:18:30.261640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.913 [2024-07-25 19:18:30.261660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.913 [2024-07-25 19:18:30.261674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.913 [2024-07-25 19:18:30.261703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.913 qpair failed and we were unable to recover it. 
00:27:37.913 [2024-07-25 19:18:30.271517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.913 [2024-07-25 19:18:30.271657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.913 [2024-07-25 19:18:30.271683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.913 [2024-07-25 19:18:30.271698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.913 [2024-07-25 19:18:30.271712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.913 [2024-07-25 19:18:30.271740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.913 qpair failed and we were unable to recover it. 
00:27:37.913 [2024-07-25 19:18:30.281576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.913 [2024-07-25 19:18:30.281757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.913 [2024-07-25 19:18:30.281799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.913 [2024-07-25 19:18:30.281815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.913 [2024-07-25 19:18:30.281828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.913 [2024-07-25 19:18:30.281870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.913 qpair failed and we were unable to recover it. 
00:27:37.913 [2024-07-25 19:18:30.291677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.913 [2024-07-25 19:18:30.291826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.913 [2024-07-25 19:18:30.291857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.913 [2024-07-25 19:18:30.291873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.913 [2024-07-25 19:18:30.291887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.913 [2024-07-25 19:18:30.291915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.913 qpair failed and we were unable to recover it. 
00:27:37.913 [2024-07-25 19:18:30.301587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.913 [2024-07-25 19:18:30.301734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.913 [2024-07-25 19:18:30.301762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.913 [2024-07-25 19:18:30.301778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.913 [2024-07-25 19:18:30.301791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.913 [2024-07-25 19:18:30.301836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.913 qpair failed and we were unable to recover it. 
00:27:37.913 [2024-07-25 19:18:30.311627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.913 [2024-07-25 19:18:30.311813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.913 [2024-07-25 19:18:30.311855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.913 [2024-07-25 19:18:30.311870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.913 [2024-07-25 19:18:30.311883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.913 [2024-07-25 19:18:30.311925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.913 qpair failed and we were unable to recover it. 
00:27:37.913 [2024-07-25 19:18:30.321652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.913 [2024-07-25 19:18:30.321799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.913 [2024-07-25 19:18:30.321826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.913 [2024-07-25 19:18:30.321842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.913 [2024-07-25 19:18:30.321855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.913 [2024-07-25 19:18:30.321898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.913 qpair failed and we were unable to recover it. 
00:27:37.913 [2024-07-25 19:18:30.331667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.913 [2024-07-25 19:18:30.331813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.913 [2024-07-25 19:18:30.331840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.913 [2024-07-25 19:18:30.331856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.913 [2024-07-25 19:18:30.331869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.913 [2024-07-25 19:18:30.331896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.913 qpair failed and we were unable to recover it. 
00:27:37.913 [2024-07-25 19:18:30.341691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.913 [2024-07-25 19:18:30.341840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.913 [2024-07-25 19:18:30.341867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.913 [2024-07-25 19:18:30.341884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.913 [2024-07-25 19:18:30.341898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:37.914 [2024-07-25 19:18:30.341927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.914 qpair failed and we were unable to recover it. 
00:27:37.914 [2024-07-25 19:18:30.351715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.914 [2024-07-25 19:18:30.351866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.914 [2024-07-25 19:18:30.351894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.914 [2024-07-25 19:18:30.351915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.914 [2024-07-25 19:18:30.351930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.914 [2024-07-25 19:18:30.351960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.914 qpair failed and we were unable to recover it.
00:27:37.914 [2024-07-25 19:18:30.361724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.914 [2024-07-25 19:18:30.361873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.914 [2024-07-25 19:18:30.361900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.914 [2024-07-25 19:18:30.361916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.914 [2024-07-25 19:18:30.361930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.914 [2024-07-25 19:18:30.361959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.914 qpair failed and we were unable to recover it.
00:27:37.914 [2024-07-25 19:18:30.371786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:37.914 [2024-07-25 19:18:30.371937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:37.914 [2024-07-25 19:18:30.371963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:37.914 [2024-07-25 19:18:30.371979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:37.914 [2024-07-25 19:18:30.371993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:37.914 [2024-07-25 19:18:30.372022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:37.914 qpair failed and we were unable to recover it.
00:27:38.173 [2024-07-25 19:18:30.381816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.173 [2024-07-25 19:18:30.381968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.173 [2024-07-25 19:18:30.381998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.173 [2024-07-25 19:18:30.382015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.173 [2024-07-25 19:18:30.382028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.173 [2024-07-25 19:18:30.382059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.173 qpair failed and we were unable to recover it.
00:27:38.173 [2024-07-25 19:18:30.391821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.173 [2024-07-25 19:18:30.391970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.173 [2024-07-25 19:18:30.391999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.173 [2024-07-25 19:18:30.392016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.173 [2024-07-25 19:18:30.392030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.173 [2024-07-25 19:18:30.392060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.173 qpair failed and we were unable to recover it.
00:27:38.173 [2024-07-25 19:18:30.401843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.173 [2024-07-25 19:18:30.402013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.173 [2024-07-25 19:18:30.402041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.173 [2024-07-25 19:18:30.402057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.173 [2024-07-25 19:18:30.402071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.173 [2024-07-25 19:18:30.402099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.173 qpair failed and we were unable to recover it.
00:27:38.173 [2024-07-25 19:18:30.411897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.173 [2024-07-25 19:18:30.412047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.173 [2024-07-25 19:18:30.412073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.173 [2024-07-25 19:18:30.412088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.173 [2024-07-25 19:18:30.412107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.173 [2024-07-25 19:18:30.412140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.173 qpair failed and we were unable to recover it.
00:27:38.173 [2024-07-25 19:18:30.421922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.173 [2024-07-25 19:18:30.422065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.173 [2024-07-25 19:18:30.422092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.173 [2024-07-25 19:18:30.422115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.173 [2024-07-25 19:18:30.422130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.173 [2024-07-25 19:18:30.422160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.173 qpair failed and we were unable to recover it.
00:27:38.173 [2024-07-25 19:18:30.431968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.173 [2024-07-25 19:18:30.432153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.173 [2024-07-25 19:18:30.432181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.173 [2024-07-25 19:18:30.432197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.173 [2024-07-25 19:18:30.432211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.173 [2024-07-25 19:18:30.432240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.173 qpair failed and we were unable to recover it.
00:27:38.174 [2024-07-25 19:18:30.441965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.174 [2024-07-25 19:18:30.442118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.174 [2024-07-25 19:18:30.442150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.174 [2024-07-25 19:18:30.442167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.174 [2024-07-25 19:18:30.442180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.174 [2024-07-25 19:18:30.442210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.174 qpair failed and we were unable to recover it.
00:27:38.174 [2024-07-25 19:18:30.452000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.174 [2024-07-25 19:18:30.452163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.174 [2024-07-25 19:18:30.452190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.174 [2024-07-25 19:18:30.452205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.174 [2024-07-25 19:18:30.452219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.174 [2024-07-25 19:18:30.452249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.174 qpair failed and we were unable to recover it.
00:27:38.174 [2024-07-25 19:18:30.462018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.174 [2024-07-25 19:18:30.462173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.174 [2024-07-25 19:18:30.462200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.174 [2024-07-25 19:18:30.462215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.174 [2024-07-25 19:18:30.462228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.174 [2024-07-25 19:18:30.462258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.174 qpair failed and we were unable to recover it.
00:27:38.174 [2024-07-25 19:18:30.472117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.174 [2024-07-25 19:18:30.472275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.174 [2024-07-25 19:18:30.472301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.174 [2024-07-25 19:18:30.472317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.174 [2024-07-25 19:18:30.472330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.174 [2024-07-25 19:18:30.472368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.174 qpair failed and we were unable to recover it.
00:27:38.174 [2024-07-25 19:18:30.482082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.174 [2024-07-25 19:18:30.482249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.174 [2024-07-25 19:18:30.482276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.174 [2024-07-25 19:18:30.482292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.174 [2024-07-25 19:18:30.482305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.174 [2024-07-25 19:18:30.482340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.174 qpair failed and we were unable to recover it.
00:27:38.174 [2024-07-25 19:18:30.492149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.174 [2024-07-25 19:18:30.492333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.174 [2024-07-25 19:18:30.492360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.174 [2024-07-25 19:18:30.492375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.174 [2024-07-25 19:18:30.492389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.174 [2024-07-25 19:18:30.492423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.174 qpair failed and we were unable to recover it.
00:27:38.174 [2024-07-25 19:18:30.502162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.174 [2024-07-25 19:18:30.502307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.174 [2024-07-25 19:18:30.502334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.174 [2024-07-25 19:18:30.502351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.174 [2024-07-25 19:18:30.502364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.174 [2024-07-25 19:18:30.502394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.174 qpair failed and we were unable to recover it.
00:27:38.174 [2024-07-25 19:18:30.512175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.174 [2024-07-25 19:18:30.512321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.174 [2024-07-25 19:18:30.512348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.174 [2024-07-25 19:18:30.512364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.174 [2024-07-25 19:18:30.512378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.174 [2024-07-25 19:18:30.512407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.174 qpair failed and we were unable to recover it.
00:27:38.174 [2024-07-25 19:18:30.522192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.174 [2024-07-25 19:18:30.522339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.174 [2024-07-25 19:18:30.522366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.174 [2024-07-25 19:18:30.522382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.174 [2024-07-25 19:18:30.522394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.174 [2024-07-25 19:18:30.522424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.174 qpair failed and we were unable to recover it.
00:27:38.174 [2024-07-25 19:18:30.532228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.174 [2024-07-25 19:18:30.532376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.174 [2024-07-25 19:18:30.532407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.174 [2024-07-25 19:18:30.532423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.174 [2024-07-25 19:18:30.532437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.174 [2024-07-25 19:18:30.532467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.174 qpair failed and we were unable to recover it.
00:27:38.174 [2024-07-25 19:18:30.542271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.174 [2024-07-25 19:18:30.542467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.174 [2024-07-25 19:18:30.542494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.174 [2024-07-25 19:18:30.542510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.174 [2024-07-25 19:18:30.542523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.174 [2024-07-25 19:18:30.542552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.174 qpair failed and we were unable to recover it.
00:27:38.174 [2024-07-25 19:18:30.552349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.174 [2024-07-25 19:18:30.552556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.174 [2024-07-25 19:18:30.552584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.174 [2024-07-25 19:18:30.552617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.174 [2024-07-25 19:18:30.552630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.174 [2024-07-25 19:18:30.552673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.174 qpair failed and we were unable to recover it.
00:27:38.174 [2024-07-25 19:18:30.562323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.174 [2024-07-25 19:18:30.562477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.174 [2024-07-25 19:18:30.562505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.174 [2024-07-25 19:18:30.562521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.174 [2024-07-25 19:18:30.562534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.174 [2024-07-25 19:18:30.562563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.174 qpair failed and we were unable to recover it.
00:27:38.174 [2024-07-25 19:18:30.572379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.174 [2024-07-25 19:18:30.572523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.175 [2024-07-25 19:18:30.572550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.175 [2024-07-25 19:18:30.572566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.175 [2024-07-25 19:18:30.572579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.175 [2024-07-25 19:18:30.572614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.175 qpair failed and we were unable to recover it.
00:27:38.175 [2024-07-25 19:18:30.582389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.175 [2024-07-25 19:18:30.582545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.175 [2024-07-25 19:18:30.582572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.175 [2024-07-25 19:18:30.582587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.175 [2024-07-25 19:18:30.582601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.175 [2024-07-25 19:18:30.582645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.175 qpair failed and we were unable to recover it.
00:27:38.175 [2024-07-25 19:18:30.592409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.175 [2024-07-25 19:18:30.592563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.175 [2024-07-25 19:18:30.592591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.175 [2024-07-25 19:18:30.592606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.175 [2024-07-25 19:18:30.592620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.175 [2024-07-25 19:18:30.592650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.175 qpair failed and we were unable to recover it.
00:27:38.175 [2024-07-25 19:18:30.602558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.175 [2024-07-25 19:18:30.602712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.175 [2024-07-25 19:18:30.602738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.175 [2024-07-25 19:18:30.602754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.175 [2024-07-25 19:18:30.602768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.175 [2024-07-25 19:18:30.602797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.175 qpair failed and we were unable to recover it.
00:27:38.175 [2024-07-25 19:18:30.612592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.175 [2024-07-25 19:18:30.612739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.175 [2024-07-25 19:18:30.612765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.175 [2024-07-25 19:18:30.612782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.175 [2024-07-25 19:18:30.612795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.175 [2024-07-25 19:18:30.612825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.175 qpair failed and we were unable to recover it.
00:27:38.175 [2024-07-25 19:18:30.622505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.175 [2024-07-25 19:18:30.622649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.175 [2024-07-25 19:18:30.622680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.175 [2024-07-25 19:18:30.622697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.175 [2024-07-25 19:18:30.622711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.175 [2024-07-25 19:18:30.622741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.175 qpair failed and we were unable to recover it.
00:27:38.175 [2024-07-25 19:18:30.632603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.175 [2024-07-25 19:18:30.632745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.175 [2024-07-25 19:18:30.632774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.175 [2024-07-25 19:18:30.632791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.175 [2024-07-25 19:18:30.632804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.175 [2024-07-25 19:18:30.632833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.175 qpair failed and we were unable to recover it.
00:27:38.175 [2024-07-25 19:18:30.642598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.175 [2024-07-25 19:18:30.642753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.175 [2024-07-25 19:18:30.642784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.175 [2024-07-25 19:18:30.642800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.175 [2024-07-25 19:18:30.642814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.175 [2024-07-25 19:18:30.642851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.175 qpair failed and we were unable to recover it.
00:27:38.434 [2024-07-25 19:18:30.652600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.434 [2024-07-25 19:18:30.652787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.434 [2024-07-25 19:18:30.652816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.434 [2024-07-25 19:18:30.652832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.434 [2024-07-25 19:18:30.652848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.434 [2024-07-25 19:18:30.652878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.435 qpair failed and we were unable to recover it.
00:27:38.435 [2024-07-25 19:18:30.662654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.435 [2024-07-25 19:18:30.662799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.435 [2024-07-25 19:18:30.662827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.435 [2024-07-25 19:18:30.662843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.435 [2024-07-25 19:18:30.662857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.435 [2024-07-25 19:18:30.662892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.435 qpair failed and we were unable to recover it.
00:27:38.435 [2024-07-25 19:18:30.672660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.435 [2024-07-25 19:18:30.672801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.435 [2024-07-25 19:18:30.672829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.435 [2024-07-25 19:18:30.672844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.435 [2024-07-25 19:18:30.672859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.435 [2024-07-25 19:18:30.672889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.435 qpair failed and we were unable to recover it.
00:27:38.435 [2024-07-25 19:18:30.682665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.435 [2024-07-25 19:18:30.682813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.435 [2024-07-25 19:18:30.682841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.435 [2024-07-25 19:18:30.682857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.435 [2024-07-25 19:18:30.682881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.435 [2024-07-25 19:18:30.682911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.435 qpair failed and we were unable to recover it.
00:27:38.435 [2024-07-25 19:18:30.692756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.435 [2024-07-25 19:18:30.692910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.435 [2024-07-25 19:18:30.692936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.435 [2024-07-25 19:18:30.692952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.435 [2024-07-25 19:18:30.692966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.435 [2024-07-25 19:18:30.692995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.435 qpair failed and we were unable to recover it.
00:27:38.435 [2024-07-25 19:18:30.702726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.435 [2024-07-25 19:18:30.702908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.435 [2024-07-25 19:18:30.702936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.435 [2024-07-25 19:18:30.702951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.435 [2024-07-25 19:18:30.702965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.435 [2024-07-25 19:18:30.702995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.435 qpair failed and we were unable to recover it.
00:27:38.435 [2024-07-25 19:18:30.712765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.435 [2024-07-25 19:18:30.712910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.435 [2024-07-25 19:18:30.712945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.435 [2024-07-25 19:18:30.712962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.435 [2024-07-25 19:18:30.712977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.435 [2024-07-25 19:18:30.713007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.435 qpair failed and we were unable to recover it. 
00:27:38.435 [2024-07-25 19:18:30.722762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.435 [2024-07-25 19:18:30.722916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.435 [2024-07-25 19:18:30.722943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.435 [2024-07-25 19:18:30.722959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.435 [2024-07-25 19:18:30.722973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.435 [2024-07-25 19:18:30.723003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.435 qpair failed and we were unable to recover it. 
00:27:38.435 [2024-07-25 19:18:30.732852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.435 [2024-07-25 19:18:30.733026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.435 [2024-07-25 19:18:30.733052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.435 [2024-07-25 19:18:30.733068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.435 [2024-07-25 19:18:30.733083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.435 [2024-07-25 19:18:30.733130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.435 qpair failed and we were unable to recover it. 
00:27:38.435 [2024-07-25 19:18:30.742838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.435 [2024-07-25 19:18:30.743035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.435 [2024-07-25 19:18:30.743062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.435 [2024-07-25 19:18:30.743078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.435 [2024-07-25 19:18:30.743093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.435 [2024-07-25 19:18:30.743131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.435 qpair failed and we were unable to recover it. 
00:27:38.435 [2024-07-25 19:18:30.752981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.435 [2024-07-25 19:18:30.753167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.435 [2024-07-25 19:18:30.753194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.435 [2024-07-25 19:18:30.753210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.435 [2024-07-25 19:18:30.753232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.435 [2024-07-25 19:18:30.753262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.435 qpair failed and we were unable to recover it. 
00:27:38.435 [2024-07-25 19:18:30.763023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.435 [2024-07-25 19:18:30.763179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.435 [2024-07-25 19:18:30.763206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.435 [2024-07-25 19:18:30.763221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.435 [2024-07-25 19:18:30.763235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.435 [2024-07-25 19:18:30.763265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.435 qpair failed and we were unable to recover it. 
00:27:38.435 [2024-07-25 19:18:30.773079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.435 [2024-07-25 19:18:30.773252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.435 [2024-07-25 19:18:30.773279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.435 [2024-07-25 19:18:30.773295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.435 [2024-07-25 19:18:30.773309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.435 [2024-07-25 19:18:30.773338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.435 qpair failed and we were unable to recover it. 
00:27:38.435 [2024-07-25 19:18:30.782982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.435 [2024-07-25 19:18:30.783165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.435 [2024-07-25 19:18:30.783191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.435 [2024-07-25 19:18:30.783207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.435 [2024-07-25 19:18:30.783221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.435 [2024-07-25 19:18:30.783250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.435 qpair failed and we were unable to recover it. 
00:27:38.435 [2024-07-25 19:18:30.792956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.436 [2024-07-25 19:18:30.793127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.436 [2024-07-25 19:18:30.793154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.436 [2024-07-25 19:18:30.793170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.436 [2024-07-25 19:18:30.793184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.436 [2024-07-25 19:18:30.793213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.436 qpair failed and we were unable to recover it. 
00:27:38.436 [2024-07-25 19:18:30.803033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.436 [2024-07-25 19:18:30.803192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.436 [2024-07-25 19:18:30.803220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.436 [2024-07-25 19:18:30.803235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.436 [2024-07-25 19:18:30.803249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.436 [2024-07-25 19:18:30.803278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.436 qpair failed and we were unable to recover it. 
00:27:38.436 [2024-07-25 19:18:30.813072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.436 [2024-07-25 19:18:30.813232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.436 [2024-07-25 19:18:30.813259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.436 [2024-07-25 19:18:30.813274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.436 [2024-07-25 19:18:30.813288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.436 [2024-07-25 19:18:30.813317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.436 qpair failed and we were unable to recover it. 
00:27:38.436 [2024-07-25 19:18:30.823063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.436 [2024-07-25 19:18:30.823226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.436 [2024-07-25 19:18:30.823252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.436 [2024-07-25 19:18:30.823268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.436 [2024-07-25 19:18:30.823283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.436 [2024-07-25 19:18:30.823312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.436 qpair failed and we were unable to recover it. 
00:27:38.436 [2024-07-25 19:18:30.833079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.436 [2024-07-25 19:18:30.833244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.436 [2024-07-25 19:18:30.833271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.436 [2024-07-25 19:18:30.833287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.436 [2024-07-25 19:18:30.833302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.436 [2024-07-25 19:18:30.833331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.436 qpair failed and we were unable to recover it. 
00:27:38.436 [2024-07-25 19:18:30.843116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.436 [2024-07-25 19:18:30.843268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.436 [2024-07-25 19:18:30.843295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.436 [2024-07-25 19:18:30.843311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.436 [2024-07-25 19:18:30.843330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.436 [2024-07-25 19:18:30.843360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.436 qpair failed and we were unable to recover it. 
00:27:38.436 [2024-07-25 19:18:30.853167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.436 [2024-07-25 19:18:30.853331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.436 [2024-07-25 19:18:30.853357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.436 [2024-07-25 19:18:30.853373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.436 [2024-07-25 19:18:30.853387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.436 [2024-07-25 19:18:30.853416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.436 qpair failed and we were unable to recover it. 
00:27:38.436 [2024-07-25 19:18:30.863163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.436 [2024-07-25 19:18:30.863305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.436 [2024-07-25 19:18:30.863331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.436 [2024-07-25 19:18:30.863347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.436 [2024-07-25 19:18:30.863361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.436 [2024-07-25 19:18:30.863390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.436 qpair failed and we were unable to recover it. 
00:27:38.436 [2024-07-25 19:18:30.873241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.436 [2024-07-25 19:18:30.873466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.436 [2024-07-25 19:18:30.873507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.436 [2024-07-25 19:18:30.873522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.436 [2024-07-25 19:18:30.873536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.436 [2024-07-25 19:18:30.873580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.436 qpair failed and we were unable to recover it. 
00:27:38.436 [2024-07-25 19:18:30.883318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.436 [2024-07-25 19:18:30.883472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.436 [2024-07-25 19:18:30.883499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.436 [2024-07-25 19:18:30.883515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.436 [2024-07-25 19:18:30.883529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.436 [2024-07-25 19:18:30.883558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.436 qpair failed and we were unable to recover it. 
00:27:38.436 [2024-07-25 19:18:30.893278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.436 [2024-07-25 19:18:30.893436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.436 [2024-07-25 19:18:30.893462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.436 [2024-07-25 19:18:30.893477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.436 [2024-07-25 19:18:30.893491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.436 [2024-07-25 19:18:30.893520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.436 qpair failed and we were unable to recover it. 
00:27:38.436 [2024-07-25 19:18:30.903303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.436 [2024-07-25 19:18:30.903480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.436 [2024-07-25 19:18:30.903509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.436 [2024-07-25 19:18:30.903525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.436 [2024-07-25 19:18:30.903540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.436 [2024-07-25 19:18:30.903571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.436 qpair failed and we were unable to recover it. 
00:27:38.696 [2024-07-25 19:18:30.913311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.696 [2024-07-25 19:18:30.913495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.696 [2024-07-25 19:18:30.913524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.696 [2024-07-25 19:18:30.913540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.696 [2024-07-25 19:18:30.913555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.696 [2024-07-25 19:18:30.913585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.696 qpair failed and we were unable to recover it. 
00:27:38.696 [2024-07-25 19:18:30.923340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.696 [2024-07-25 19:18:30.923495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.696 [2024-07-25 19:18:30.923525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.696 [2024-07-25 19:18:30.923541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.696 [2024-07-25 19:18:30.923555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.696 [2024-07-25 19:18:30.923585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.696 qpair failed and we were unable to recover it. 
00:27:38.696 [2024-07-25 19:18:30.933444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.696 [2024-07-25 19:18:30.933609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.696 [2024-07-25 19:18:30.933636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.696 [2024-07-25 19:18:30.933657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.696 [2024-07-25 19:18:30.933672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.696 [2024-07-25 19:18:30.933703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.696 qpair failed and we were unable to recover it. 
00:27:38.696 [2024-07-25 19:18:30.943376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.696 [2024-07-25 19:18:30.943525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.696 [2024-07-25 19:18:30.943551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.696 [2024-07-25 19:18:30.943567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.696 [2024-07-25 19:18:30.943581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.696 [2024-07-25 19:18:30.943610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.696 qpair failed and we were unable to recover it. 
00:27:38.696 [2024-07-25 19:18:30.953422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.696 [2024-07-25 19:18:30.953587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.696 [2024-07-25 19:18:30.953615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.696 [2024-07-25 19:18:30.953636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.696 [2024-07-25 19:18:30.953651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.696 [2024-07-25 19:18:30.953681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.696 qpair failed and we were unable to recover it. 
00:27:38.696 [2024-07-25 19:18:30.963437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.696 [2024-07-25 19:18:30.963586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.696 [2024-07-25 19:18:30.963612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.696 [2024-07-25 19:18:30.963627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.696 [2024-07-25 19:18:30.963642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.696 [2024-07-25 19:18:30.963672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.696 qpair failed and we were unable to recover it. 
00:27:38.696 [2024-07-25 19:18:30.973486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.696 [2024-07-25 19:18:30.973671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.696 [2024-07-25 19:18:30.973700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.696 [2024-07-25 19:18:30.973716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.696 [2024-07-25 19:18:30.973730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.696 [2024-07-25 19:18:30.973760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.696 qpair failed and we were unable to recover it. 
00:27:38.696 [2024-07-25 19:18:30.983516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.696 [2024-07-25 19:18:30.983689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.696 [2024-07-25 19:18:30.983717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.696 [2024-07-25 19:18:30.983732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.696 [2024-07-25 19:18:30.983747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.696 [2024-07-25 19:18:30.983776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.696 qpair failed and we were unable to recover it. 
00:27:38.696 [2024-07-25 19:18:30.993538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.696 [2024-07-25 19:18:30.993698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.696 [2024-07-25 19:18:30.993727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.696 [2024-07-25 19:18:30.993743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.696 [2024-07-25 19:18:30.993758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.696 [2024-07-25 19:18:30.993787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.696 qpair failed and we were unable to recover it. 
00:27:38.696 [2024-07-25 19:18:31.003587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.696 [2024-07-25 19:18:31.003765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.696 [2024-07-25 19:18:31.003792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.696 [2024-07-25 19:18:31.003808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.696 [2024-07-25 19:18:31.003822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.696 [2024-07-25 19:18:31.003852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.696 qpair failed and we were unable to recover it. 
00:27:38.696 [2024-07-25 19:18:31.013584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.696 [2024-07-25 19:18:31.013742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.696 [2024-07-25 19:18:31.013769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.696 [2024-07-25 19:18:31.013785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.696 [2024-07-25 19:18:31.013799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.696 [2024-07-25 19:18:31.013828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.696 qpair failed and we were unable to recover it. 
00:27:38.696 [2024-07-25 19:18:31.023611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.697 [2024-07-25 19:18:31.023767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.697 [2024-07-25 19:18:31.023793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.697 [2024-07-25 19:18:31.023816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.697 [2024-07-25 19:18:31.023831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.697 [2024-07-25 19:18:31.023861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.697 qpair failed and we were unable to recover it. 
00:27:38.697 [2024-07-25 19:18:31.033659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.697 [2024-07-25 19:18:31.033842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.697 [2024-07-25 19:18:31.033869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.697 [2024-07-25 19:18:31.033884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.697 [2024-07-25 19:18:31.033898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.697 [2024-07-25 19:18:31.033928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.697 qpair failed and we were unable to recover it. 
00:27:38.697 [2024-07-25 19:18:31.043670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.697 [2024-07-25 19:18:31.043853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.697 [2024-07-25 19:18:31.043879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.697 [2024-07-25 19:18:31.043895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.697 [2024-07-25 19:18:31.043910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.697 [2024-07-25 19:18:31.043939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.697 qpair failed and we were unable to recover it. 
00:27:38.697 [2024-07-25 19:18:31.053709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.697 [2024-07-25 19:18:31.053864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.697 [2024-07-25 19:18:31.053890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.697 [2024-07-25 19:18:31.053906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.697 [2024-07-25 19:18:31.053920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.697 [2024-07-25 19:18:31.053949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.697 qpair failed and we were unable to recover it. 
00:27:38.697 [2024-07-25 19:18:31.063776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.697 [2024-07-25 19:18:31.063960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.697 [2024-07-25 19:18:31.063986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.697 [2024-07-25 19:18:31.064002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.697 [2024-07-25 19:18:31.064016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.697 [2024-07-25 19:18:31.064046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.697 qpair failed and we were unable to recover it. 
00:27:38.697 [2024-07-25 19:18:31.073742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.697 [2024-07-25 19:18:31.073896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.697 [2024-07-25 19:18:31.073923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.697 [2024-07-25 19:18:31.073939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.697 [2024-07-25 19:18:31.073953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.697 [2024-07-25 19:18:31.073982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.697 qpair failed and we were unable to recover it. 
00:27:38.697 [2024-07-25 19:18:31.083881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.697 [2024-07-25 19:18:31.084041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.697 [2024-07-25 19:18:31.084068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.697 [2024-07-25 19:18:31.084083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.697 [2024-07-25 19:18:31.084097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.697 [2024-07-25 19:18:31.084134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.697 qpair failed and we were unable to recover it. 
00:27:38.697 [2024-07-25 19:18:31.093832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.697 [2024-07-25 19:18:31.093998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.697 [2024-07-25 19:18:31.094024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.697 [2024-07-25 19:18:31.094040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.697 [2024-07-25 19:18:31.094057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.697 [2024-07-25 19:18:31.094087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.697 qpair failed and we were unable to recover it. 
00:27:38.697 [2024-07-25 19:18:31.103843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.697 [2024-07-25 19:18:31.104043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.697 [2024-07-25 19:18:31.104070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.697 [2024-07-25 19:18:31.104085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.697 [2024-07-25 19:18:31.104100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.697 [2024-07-25 19:18:31.104136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.697 qpair failed and we were unable to recover it. 
00:27:38.697 [2024-07-25 19:18:31.113871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.697 [2024-07-25 19:18:31.114017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.697 [2024-07-25 19:18:31.114043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.697 [2024-07-25 19:18:31.114065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.697 [2024-07-25 19:18:31.114080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.697 [2024-07-25 19:18:31.114116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.697 qpair failed and we were unable to recover it. 
00:27:38.697 [2024-07-25 19:18:31.123894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.697 [2024-07-25 19:18:31.124087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.697 [2024-07-25 19:18:31.124121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.697 [2024-07-25 19:18:31.124137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.697 [2024-07-25 19:18:31.124152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.697 [2024-07-25 19:18:31.124181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.697 qpair failed and we were unable to recover it. 
00:27:38.697 [2024-07-25 19:18:31.133967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.697 [2024-07-25 19:18:31.134157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.697 [2024-07-25 19:18:31.134184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.697 [2024-07-25 19:18:31.134199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.697 [2024-07-25 19:18:31.134213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.697 [2024-07-25 19:18:31.134242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.697 qpair failed and we were unable to recover it. 
00:27:38.697 [2024-07-25 19:18:31.143987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.697 [2024-07-25 19:18:31.144164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.697 [2024-07-25 19:18:31.144202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.697 [2024-07-25 19:18:31.144219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.697 [2024-07-25 19:18:31.144233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.697 [2024-07-25 19:18:31.144264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.697 qpair failed and we were unable to recover it. 
00:27:38.697 [2024-07-25 19:18:31.153973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.697 [2024-07-25 19:18:31.154125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.697 [2024-07-25 19:18:31.154152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.697 [2024-07-25 19:18:31.154168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.698 [2024-07-25 19:18:31.154182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.698 [2024-07-25 19:18:31.154211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.698 qpair failed and we were unable to recover it. 
00:27:38.698 [2024-07-25 19:18:31.164019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.698 [2024-07-25 19:18:31.164210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.698 [2024-07-25 19:18:31.164239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.698 [2024-07-25 19:18:31.164255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.698 [2024-07-25 19:18:31.164270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.698 [2024-07-25 19:18:31.164300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.698 qpair failed and we were unable to recover it. 
00:27:38.957 [2024-07-25 19:18:31.174065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.957 [2024-07-25 19:18:31.174230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.957 [2024-07-25 19:18:31.174259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.957 [2024-07-25 19:18:31.174275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.957 [2024-07-25 19:18:31.174290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.957 [2024-07-25 19:18:31.174321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.957 qpair failed and we were unable to recover it. 
00:27:38.957 [2024-07-25 19:18:31.184076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.957 [2024-07-25 19:18:31.184247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.957 [2024-07-25 19:18:31.184274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.957 [2024-07-25 19:18:31.184291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.957 [2024-07-25 19:18:31.184305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.957 [2024-07-25 19:18:31.184335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.957 qpair failed and we were unable to recover it. 
00:27:38.957 [2024-07-25 19:18:31.194135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.957 [2024-07-25 19:18:31.194320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.957 [2024-07-25 19:18:31.194347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.957 [2024-07-25 19:18:31.194363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.957 [2024-07-25 19:18:31.194378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.957 [2024-07-25 19:18:31.194407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.957 qpair failed and we were unable to recover it. 
00:27:38.957 [2024-07-25 19:18:31.204200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.957 [2024-07-25 19:18:31.204352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.957 [2024-07-25 19:18:31.204384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.957 [2024-07-25 19:18:31.204401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.957 [2024-07-25 19:18:31.204415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.957 [2024-07-25 19:18:31.204444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.957 qpair failed and we were unable to recover it. 
00:27:38.957 [2024-07-25 19:18:31.214176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.957 [2024-07-25 19:18:31.214334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.957 [2024-07-25 19:18:31.214361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.957 [2024-07-25 19:18:31.214376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.957 [2024-07-25 19:18:31.214391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.957 [2024-07-25 19:18:31.214420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.957 qpair failed and we were unable to recover it. 
00:27:38.957 [2024-07-25 19:18:31.224277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.957 [2024-07-25 19:18:31.224432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.957 [2024-07-25 19:18:31.224458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.957 [2024-07-25 19:18:31.224474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.957 [2024-07-25 19:18:31.224488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.957 [2024-07-25 19:18:31.224517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.957 qpair failed and we were unable to recover it. 
00:27:38.957 [2024-07-25 19:18:31.234314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.957 [2024-07-25 19:18:31.234459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.957 [2024-07-25 19:18:31.234486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.957 [2024-07-25 19:18:31.234502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.957 [2024-07-25 19:18:31.234516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.957 [2024-07-25 19:18:31.234560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.957 qpair failed and we were unable to recover it. 
00:27:38.958 [2024-07-25 19:18:31.244259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.958 [2024-07-25 19:18:31.244454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.958 [2024-07-25 19:18:31.244481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.958 [2024-07-25 19:18:31.244496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.958 [2024-07-25 19:18:31.244511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.958 [2024-07-25 19:18:31.244541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.958 qpair failed and we were unable to recover it. 
00:27:38.958 [2024-07-25 19:18:31.254324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.958 [2024-07-25 19:18:31.254482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.958 [2024-07-25 19:18:31.254508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.958 [2024-07-25 19:18:31.254524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.958 [2024-07-25 19:18:31.254538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.958 [2024-07-25 19:18:31.254568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.958 qpair failed and we were unable to recover it. 
00:27:38.958 [2024-07-25 19:18:31.264315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.958 [2024-07-25 19:18:31.264513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.958 [2024-07-25 19:18:31.264553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.958 [2024-07-25 19:18:31.264568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.958 [2024-07-25 19:18:31.264582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.958 [2024-07-25 19:18:31.264625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.958 qpair failed and we were unable to recover it. 
00:27:38.958 [2024-07-25 19:18:31.274371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.958 [2024-07-25 19:18:31.274519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.958 [2024-07-25 19:18:31.274546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.958 [2024-07-25 19:18:31.274561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.958 [2024-07-25 19:18:31.274575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.958 [2024-07-25 19:18:31.274604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.958 qpair failed and we were unable to recover it. 
00:27:38.958 [2024-07-25 19:18:31.284357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.958 [2024-07-25 19:18:31.284505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.958 [2024-07-25 19:18:31.284531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.958 [2024-07-25 19:18:31.284547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.958 [2024-07-25 19:18:31.284561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.958 [2024-07-25 19:18:31.284590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.958 qpair failed and we were unable to recover it. 
00:27:38.958 [2024-07-25 19:18:31.294488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.958 [2024-07-25 19:18:31.294659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.958 [2024-07-25 19:18:31.294691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.958 [2024-07-25 19:18:31.294707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.958 [2024-07-25 19:18:31.294721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.958 [2024-07-25 19:18:31.294750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.958 qpair failed and we were unable to recover it. 
00:27:38.958 [2024-07-25 19:18:31.304477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.958 [2024-07-25 19:18:31.304646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.958 [2024-07-25 19:18:31.304673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.958 [2024-07-25 19:18:31.304689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.958 [2024-07-25 19:18:31.304703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:38.958 [2024-07-25 19:18:31.304732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.958 qpair failed and we were unable to recover it. 
00:27:38.958 [2024-07-25 19:18:31.314443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.958 [2024-07-25 19:18:31.314593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.958 [2024-07-25 19:18:31.314620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.958 [2024-07-25 19:18:31.314636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.958 [2024-07-25 19:18:31.314650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.958 [2024-07-25 19:18:31.314695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.958 qpair failed and we were unable to recover it.
00:27:38.958 [2024-07-25 19:18:31.324517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.958 [2024-07-25 19:18:31.324665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.958 [2024-07-25 19:18:31.324691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.958 [2024-07-25 19:18:31.324707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.958 [2024-07-25 19:18:31.324721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.958 [2024-07-25 19:18:31.324750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.958 qpair failed and we were unable to recover it.
00:27:38.958 [2024-07-25 19:18:31.334504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.958 [2024-07-25 19:18:31.334664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.958 [2024-07-25 19:18:31.334691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.958 [2024-07-25 19:18:31.334707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.958 [2024-07-25 19:18:31.334724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.958 [2024-07-25 19:18:31.334759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.958 qpair failed and we were unable to recover it.
00:27:38.958 [2024-07-25 19:18:31.344578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.958 [2024-07-25 19:18:31.344736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.958 [2024-07-25 19:18:31.344762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.958 [2024-07-25 19:18:31.344778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.958 [2024-07-25 19:18:31.344792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.958 [2024-07-25 19:18:31.344836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.958 qpair failed and we were unable to recover it.
00:27:38.958 [2024-07-25 19:18:31.354562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.958 [2024-07-25 19:18:31.354719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.958 [2024-07-25 19:18:31.354744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.958 [2024-07-25 19:18:31.354760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.958 [2024-07-25 19:18:31.354775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.958 [2024-07-25 19:18:31.354804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.958 qpair failed and we were unable to recover it.
00:27:38.958 [2024-07-25 19:18:31.364600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.958 [2024-07-25 19:18:31.364763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.958 [2024-07-25 19:18:31.364790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.958 [2024-07-25 19:18:31.364805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.958 [2024-07-25 19:18:31.364820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.958 [2024-07-25 19:18:31.364849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.958 qpair failed and we were unable to recover it.
00:27:38.958 [2024-07-25 19:18:31.374657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.958 [2024-07-25 19:18:31.374847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.959 [2024-07-25 19:18:31.374874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.959 [2024-07-25 19:18:31.374889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.959 [2024-07-25 19:18:31.374904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.959 [2024-07-25 19:18:31.374933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.959 qpair failed and we were unable to recover it.
00:27:38.959 [2024-07-25 19:18:31.384686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.959 [2024-07-25 19:18:31.384877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.959 [2024-07-25 19:18:31.384909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.959 [2024-07-25 19:18:31.384925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.959 [2024-07-25 19:18:31.384940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.959 [2024-07-25 19:18:31.384969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.959 qpair failed and we were unable to recover it.
00:27:38.959 [2024-07-25 19:18:31.394775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.959 [2024-07-25 19:18:31.394928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.959 [2024-07-25 19:18:31.394958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.959 [2024-07-25 19:18:31.394974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.959 [2024-07-25 19:18:31.394988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.959 [2024-07-25 19:18:31.395033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.959 qpair failed and we were unable to recover it.
00:27:38.959 [2024-07-25 19:18:31.404717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.959 [2024-07-25 19:18:31.404863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.959 [2024-07-25 19:18:31.404890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.959 [2024-07-25 19:18:31.404905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.959 [2024-07-25 19:18:31.404920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.959 [2024-07-25 19:18:31.404949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.959 qpair failed and we were unable to recover it.
00:27:38.959 [2024-07-25 19:18:31.414755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.959 [2024-07-25 19:18:31.414933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.959 [2024-07-25 19:18:31.414961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.959 [2024-07-25 19:18:31.414977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.959 [2024-07-25 19:18:31.414991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.959 [2024-07-25 19:18:31.415021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.959 qpair failed and we were unable to recover it.
00:27:38.959 [2024-07-25 19:18:31.424848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.959 [2024-07-25 19:18:31.425039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.959 [2024-07-25 19:18:31.425068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.959 [2024-07-25 19:18:31.425085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.959 [2024-07-25 19:18:31.425100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:38.959 [2024-07-25 19:18:31.425146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.959 qpair failed and we were unable to recover it.
00:27:39.218 [2024-07-25 19:18:31.434815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.218 [2024-07-25 19:18:31.434968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.218 [2024-07-25 19:18:31.434997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.218 [2024-07-25 19:18:31.435014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.218 [2024-07-25 19:18:31.435028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.218 [2024-07-25 19:18:31.435059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.218 qpair failed and we were unable to recover it.
00:27:39.218 [2024-07-25 19:18:31.444831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.218 [2024-07-25 19:18:31.444999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.218 [2024-07-25 19:18:31.445027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.218 [2024-07-25 19:18:31.445043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.218 [2024-07-25 19:18:31.445057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.218 [2024-07-25 19:18:31.445087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.218 qpair failed and we were unable to recover it.
00:27:39.218 [2024-07-25 19:18:31.454871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.218 [2024-07-25 19:18:31.455027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.218 [2024-07-25 19:18:31.455054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.218 [2024-07-25 19:18:31.455069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.218 [2024-07-25 19:18:31.455084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.218 [2024-07-25 19:18:31.455120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.218 qpair failed and we were unable to recover it.
00:27:39.218 [2024-07-25 19:18:31.464870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.218 [2024-07-25 19:18:31.465025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.218 [2024-07-25 19:18:31.465051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.218 [2024-07-25 19:18:31.465067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.218 [2024-07-25 19:18:31.465081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.218 [2024-07-25 19:18:31.465118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.218 qpair failed and we were unable to recover it.
00:27:39.218 [2024-07-25 19:18:31.474952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.218 [2024-07-25 19:18:31.475108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.218 [2024-07-25 19:18:31.475148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.218 [2024-07-25 19:18:31.475166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.218 [2024-07-25 19:18:31.475180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.218 [2024-07-25 19:18:31.475210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.218 qpair failed and we were unable to recover it.
00:27:39.218 [2024-07-25 19:18:31.484945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.218 [2024-07-25 19:18:31.485133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.218 [2024-07-25 19:18:31.485160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.218 [2024-07-25 19:18:31.485177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.218 [2024-07-25 19:18:31.485189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.219 [2024-07-25 19:18:31.485218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.219 qpair failed and we were unable to recover it.
00:27:39.219 [2024-07-25 19:18:31.494971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.219 [2024-07-25 19:18:31.495132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.219 [2024-07-25 19:18:31.495160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.219 [2024-07-25 19:18:31.495175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.219 [2024-07-25 19:18:31.495188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.219 [2024-07-25 19:18:31.495218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.219 qpair failed and we were unable to recover it.
00:27:39.219 [2024-07-25 19:18:31.505039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.219 [2024-07-25 19:18:31.505208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.219 [2024-07-25 19:18:31.505235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.219 [2024-07-25 19:18:31.505251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.219 [2024-07-25 19:18:31.505264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.219 [2024-07-25 19:18:31.505295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.219 qpair failed and we were unable to recover it.
00:27:39.219 [2024-07-25 19:18:31.515032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.219 [2024-07-25 19:18:31.515183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.219 [2024-07-25 19:18:31.515210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.219 [2024-07-25 19:18:31.515226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.219 [2024-07-25 19:18:31.515248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.219 [2024-07-25 19:18:31.515278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.219 qpair failed and we were unable to recover it.
00:27:39.219 [2024-07-25 19:18:31.525030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.219 [2024-07-25 19:18:31.525183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.219 [2024-07-25 19:18:31.525211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.219 [2024-07-25 19:18:31.525227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.219 [2024-07-25 19:18:31.525241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.219 [2024-07-25 19:18:31.525270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.219 qpair failed and we were unable to recover it.
00:27:39.219 [2024-07-25 19:18:31.535087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.219 [2024-07-25 19:18:31.535242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.219 [2024-07-25 19:18:31.535268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.219 [2024-07-25 19:18:31.535284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.219 [2024-07-25 19:18:31.535298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.219 [2024-07-25 19:18:31.535327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.219 qpair failed and we were unable to recover it.
00:27:39.219 [2024-07-25 19:18:31.545085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.219 [2024-07-25 19:18:31.545245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.219 [2024-07-25 19:18:31.545272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.219 [2024-07-25 19:18:31.545289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.219 [2024-07-25 19:18:31.545303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.219 [2024-07-25 19:18:31.545333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.219 qpair failed and we were unable to recover it.
00:27:39.219 [2024-07-25 19:18:31.555137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.219 [2024-07-25 19:18:31.555288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.219 [2024-07-25 19:18:31.555315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.219 [2024-07-25 19:18:31.555330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.219 [2024-07-25 19:18:31.555344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.219 [2024-07-25 19:18:31.555372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.219 qpair failed and we were unable to recover it.
00:27:39.219 [2024-07-25 19:18:31.565183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.219 [2024-07-25 19:18:31.565341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.219 [2024-07-25 19:18:31.565368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.219 [2024-07-25 19:18:31.565384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.219 [2024-07-25 19:18:31.565398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.219 [2024-07-25 19:18:31.565443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.219 qpair failed and we were unable to recover it.
00:27:39.219 [2024-07-25 19:18:31.575201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.219 [2024-07-25 19:18:31.575350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.219 [2024-07-25 19:18:31.575377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.219 [2024-07-25 19:18:31.575393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.219 [2024-07-25 19:18:31.575407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.219 [2024-07-25 19:18:31.575437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.219 qpair failed and we were unable to recover it.
00:27:39.219 [2024-07-25 19:18:31.585255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.219 [2024-07-25 19:18:31.585402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.219 [2024-07-25 19:18:31.585429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.219 [2024-07-25 19:18:31.585444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.219 [2024-07-25 19:18:31.585458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.219 [2024-07-25 19:18:31.585487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.219 qpair failed and we were unable to recover it.
00:27:39.219 [2024-07-25 19:18:31.595258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.219 [2024-07-25 19:18:31.595446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.219 [2024-07-25 19:18:31.595490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.219 [2024-07-25 19:18:31.595506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.219 [2024-07-25 19:18:31.595521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.219 [2024-07-25 19:18:31.595565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.219 qpair failed and we were unable to recover it.
00:27:39.219 [2024-07-25 19:18:31.605252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.219 [2024-07-25 19:18:31.605397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.219 [2024-07-25 19:18:31.605424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.219 [2024-07-25 19:18:31.605439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.219 [2024-07-25 19:18:31.605459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.219 [2024-07-25 19:18:31.605490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.219 qpair failed and we were unable to recover it.
00:27:39.219 [2024-07-25 19:18:31.615454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.219 [2024-07-25 19:18:31.615611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.220 [2024-07-25 19:18:31.615640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.220 [2024-07-25 19:18:31.615660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.220 [2024-07-25 19:18:31.615674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.220 [2024-07-25 19:18:31.615704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.220 qpair failed and we were unable to recover it.
00:27:39.220 [2024-07-25 19:18:31.625360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.220 [2024-07-25 19:18:31.625502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.220 [2024-07-25 19:18:31.625529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.220 [2024-07-25 19:18:31.625545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.220 [2024-07-25 19:18:31.625559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.220 [2024-07-25 19:18:31.625588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.220 qpair failed and we were unable to recover it.
00:27:39.220 [2024-07-25 19:18:31.635370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.220 [2024-07-25 19:18:31.635515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.220 [2024-07-25 19:18:31.635542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.220 [2024-07-25 19:18:31.635558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.220 [2024-07-25 19:18:31.635571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.220 [2024-07-25 19:18:31.635602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.220 qpair failed and we were unable to recover it.
00:27:39.220 [2024-07-25 19:18:31.645411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.220 [2024-07-25 19:18:31.645568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.220 [2024-07-25 19:18:31.645595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.220 [2024-07-25 19:18:31.645611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.220 [2024-07-25 19:18:31.645624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.220 [2024-07-25 19:18:31.645668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.220 qpair failed and we were unable to recover it.
00:27:39.220 [2024-07-25 19:18:31.655436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.220 [2024-07-25 19:18:31.655605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.220 [2024-07-25 19:18:31.655632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.220 [2024-07-25 19:18:31.655648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.220 [2024-07-25 19:18:31.655661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.220 [2024-07-25 19:18:31.655690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.220 qpair failed and we were unable to recover it.
00:27:39.220 [2024-07-25 19:18:31.665462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.220 [2024-07-25 19:18:31.665642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.220 [2024-07-25 19:18:31.665669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.220 [2024-07-25 19:18:31.665685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.220 [2024-07-25 19:18:31.665698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.220 [2024-07-25 19:18:31.665728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.220 qpair failed and we were unable to recover it.
00:27:39.220 [2024-07-25 19:18:31.675552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.220 [2024-07-25 19:18:31.675688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.220 [2024-07-25 19:18:31.675715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.220 [2024-07-25 19:18:31.675731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.220 [2024-07-25 19:18:31.675744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.220 [2024-07-25 19:18:31.675775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.220 qpair failed and we were unable to recover it. 
00:27:39.220 [2024-07-25 19:18:31.685548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.220 [2024-07-25 19:18:31.685713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.220 [2024-07-25 19:18:31.685743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.220 [2024-07-25 19:18:31.685759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.220 [2024-07-25 19:18:31.685791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.220 [2024-07-25 19:18:31.685821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.220 qpair failed and we were unable to recover it. 
00:27:39.479 [2024-07-25 19:18:31.695512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.479 [2024-07-25 19:18:31.695667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.479 [2024-07-25 19:18:31.695695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.479 [2024-07-25 19:18:31.695719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.479 [2024-07-25 19:18:31.695739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.479 [2024-07-25 19:18:31.695769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.479 qpair failed and we were unable to recover it. 
00:27:39.479 [2024-07-25 19:18:31.705576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.479 [2024-07-25 19:18:31.705724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.479 [2024-07-25 19:18:31.705753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.479 [2024-07-25 19:18:31.705769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.479 [2024-07-25 19:18:31.705784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.479 [2024-07-25 19:18:31.705814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.479 qpair failed and we were unable to recover it. 
00:27:39.479 [2024-07-25 19:18:31.715613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.479 [2024-07-25 19:18:31.715756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.479 [2024-07-25 19:18:31.715784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.479 [2024-07-25 19:18:31.715800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.479 [2024-07-25 19:18:31.715815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.479 [2024-07-25 19:18:31.715859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.479 qpair failed and we were unable to recover it. 
00:27:39.479 [2024-07-25 19:18:31.725615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.479 [2024-07-25 19:18:31.725760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.479 [2024-07-25 19:18:31.725788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.479 [2024-07-25 19:18:31.725804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.479 [2024-07-25 19:18:31.725817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.479 [2024-07-25 19:18:31.725847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.479 qpair failed and we were unable to recover it. 
00:27:39.479 [2024-07-25 19:18:31.735649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.479 [2024-07-25 19:18:31.735799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.479 [2024-07-25 19:18:31.735826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.479 [2024-07-25 19:18:31.735842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.479 [2024-07-25 19:18:31.735856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.479 [2024-07-25 19:18:31.735885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.479 qpair failed and we were unable to recover it. 
00:27:39.479 [2024-07-25 19:18:31.745685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.479 [2024-07-25 19:18:31.745875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.479 [2024-07-25 19:18:31.745903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.479 [2024-07-25 19:18:31.745919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.479 [2024-07-25 19:18:31.745933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.479 [2024-07-25 19:18:31.745963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.479 qpair failed and we were unable to recover it. 
00:27:39.479 [2024-07-25 19:18:31.755719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.479 [2024-07-25 19:18:31.755888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.479 [2024-07-25 19:18:31.755930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.479 [2024-07-25 19:18:31.755947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.479 [2024-07-25 19:18:31.755960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.479 [2024-07-25 19:18:31.755989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.479 qpair failed and we were unable to recover it. 
00:27:39.479 [2024-07-25 19:18:31.765699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.479 [2024-07-25 19:18:31.765843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.479 [2024-07-25 19:18:31.765871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.479 [2024-07-25 19:18:31.765887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.479 [2024-07-25 19:18:31.765901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.479 [2024-07-25 19:18:31.765930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.479 qpair failed and we were unable to recover it. 
00:27:39.479 [2024-07-25 19:18:31.775755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.479 [2024-07-25 19:18:31.775904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.479 [2024-07-25 19:18:31.775931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.479 [2024-07-25 19:18:31.775946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.479 [2024-07-25 19:18:31.775960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.479 [2024-07-25 19:18:31.775989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.479 qpair failed and we were unable to recover it. 
00:27:39.479 [2024-07-25 19:18:31.785759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.479 [2024-07-25 19:18:31.785902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.479 [2024-07-25 19:18:31.785929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.479 [2024-07-25 19:18:31.785950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.479 [2024-07-25 19:18:31.785966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.479 [2024-07-25 19:18:31.785995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.479 qpair failed and we were unable to recover it. 
00:27:39.479 [2024-07-25 19:18:31.795790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.479 [2024-07-25 19:18:31.795947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.479 [2024-07-25 19:18:31.795974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.479 [2024-07-25 19:18:31.795990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.479 [2024-07-25 19:18:31.796004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.479 [2024-07-25 19:18:31.796034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.479 qpair failed and we were unable to recover it. 
00:27:39.479 [2024-07-25 19:18:31.805822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.479 [2024-07-25 19:18:31.805962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.479 [2024-07-25 19:18:31.805989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.479 [2024-07-25 19:18:31.806005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.479 [2024-07-25 19:18:31.806019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.479 [2024-07-25 19:18:31.806049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.479 qpair failed and we were unable to recover it. 
00:27:39.479 [2024-07-25 19:18:31.815950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.479 [2024-07-25 19:18:31.816159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.479 [2024-07-25 19:18:31.816186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.479 [2024-07-25 19:18:31.816202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.479 [2024-07-25 19:18:31.816217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.479 [2024-07-25 19:18:31.816247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.479 qpair failed and we were unable to recover it. 
00:27:39.479 [2024-07-25 19:18:31.825899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.479 [2024-07-25 19:18:31.826062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.479 [2024-07-25 19:18:31.826089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.479 [2024-07-25 19:18:31.826110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.479 [2024-07-25 19:18:31.826126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.479 [2024-07-25 19:18:31.826164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.479 qpair failed and we were unable to recover it. 
00:27:39.479 [2024-07-25 19:18:31.835911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.479 [2024-07-25 19:18:31.836060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.480 [2024-07-25 19:18:31.836088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.480 [2024-07-25 19:18:31.836109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.480 [2024-07-25 19:18:31.836125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.480 [2024-07-25 19:18:31.836166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.480 qpair failed and we were unable to recover it. 
00:27:39.480 [2024-07-25 19:18:31.845948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.480 [2024-07-25 19:18:31.846100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.480 [2024-07-25 19:18:31.846147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.480 [2024-07-25 19:18:31.846163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.480 [2024-07-25 19:18:31.846177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.480 [2024-07-25 19:18:31.846211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.480 qpair failed and we were unable to recover it. 
00:27:39.480 [2024-07-25 19:18:31.855980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.480 [2024-07-25 19:18:31.856138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.480 [2024-07-25 19:18:31.856164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.480 [2024-07-25 19:18:31.856180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.480 [2024-07-25 19:18:31.856194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.480 [2024-07-25 19:18:31.856234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.480 qpair failed and we were unable to recover it. 
00:27:39.480 [2024-07-25 19:18:31.866060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.480 [2024-07-25 19:18:31.866217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.480 [2024-07-25 19:18:31.866244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.480 [2024-07-25 19:18:31.866259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.480 [2024-07-25 19:18:31.866273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.480 [2024-07-25 19:18:31.866303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.480 qpair failed and we were unable to recover it. 
00:27:39.480 [2024-07-25 19:18:31.876020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.480 [2024-07-25 19:18:31.876169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.480 [2024-07-25 19:18:31.876196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.480 [2024-07-25 19:18:31.876217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.480 [2024-07-25 19:18:31.876232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.480 [2024-07-25 19:18:31.876263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.480 qpair failed and we were unable to recover it. 
00:27:39.480 [2024-07-25 19:18:31.886062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.480 [2024-07-25 19:18:31.886216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.480 [2024-07-25 19:18:31.886243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.480 [2024-07-25 19:18:31.886259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.480 [2024-07-25 19:18:31.886275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.480 [2024-07-25 19:18:31.886305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.480 qpair failed and we were unable to recover it. 
00:27:39.480 [2024-07-25 19:18:31.896094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.480 [2024-07-25 19:18:31.896272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.480 [2024-07-25 19:18:31.896299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.480 [2024-07-25 19:18:31.896315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.480 [2024-07-25 19:18:31.896330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.480 [2024-07-25 19:18:31.896359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.480 qpair failed and we were unable to recover it. 
00:27:39.480 [2024-07-25 19:18:31.906140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.480 [2024-07-25 19:18:31.906282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.480 [2024-07-25 19:18:31.906309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.480 [2024-07-25 19:18:31.906324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.480 [2024-07-25 19:18:31.906338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.480 [2024-07-25 19:18:31.906369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.480 qpair failed and we were unable to recover it. 
00:27:39.480 [2024-07-25 19:18:31.916141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.480 [2024-07-25 19:18:31.916310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.480 [2024-07-25 19:18:31.916337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.480 [2024-07-25 19:18:31.916365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.480 [2024-07-25 19:18:31.916378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.480 [2024-07-25 19:18:31.916408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.480 qpair failed and we were unable to recover it. 
00:27:39.480 [2024-07-25 19:18:31.926248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.480 [2024-07-25 19:18:31.926388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.480 [2024-07-25 19:18:31.926415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.480 [2024-07-25 19:18:31.926432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.480 [2024-07-25 19:18:31.926446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.480 [2024-07-25 19:18:31.926475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.480 qpair failed and we were unable to recover it. 
00:27:39.480 [2024-07-25 19:18:31.936217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.480 [2024-07-25 19:18:31.936391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.480 [2024-07-25 19:18:31.936419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.480 [2024-07-25 19:18:31.936434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.480 [2024-07-25 19:18:31.936448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.480 [2024-07-25 19:18:31.936476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.480 qpair failed and we were unable to recover it. 
00:27:39.480 [2024-07-25 19:18:31.946282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.480 [2024-07-25 19:18:31.946432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.480 [2024-07-25 19:18:31.946461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.480 [2024-07-25 19:18:31.946479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.480 [2024-07-25 19:18:31.946493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:39.480 [2024-07-25 19:18:31.946523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.480 qpair failed and we were unable to recover it. 
00:27:39.738 [2024-07-25 19:18:31.956311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.738 [2024-07-25 19:18:31.956527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.738 [2024-07-25 19:18:31.956570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.738 [2024-07-25 19:18:31.956586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.738 [2024-07-25 19:18:31.956598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.738 [2024-07-25 19:18:31.956628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.738 qpair failed and we were unable to recover it.
00:27:39.738 [2024-07-25 19:18:31.966291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.738 [2024-07-25 19:18:31.966434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.738 [2024-07-25 19:18:31.966467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.738 [2024-07-25 19:18:31.966484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.739 [2024-07-25 19:18:31.966498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.739 [2024-07-25 19:18:31.966527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.739 qpair failed and we were unable to recover it.
00:27:39.739 [2024-07-25 19:18:31.976451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.739 [2024-07-25 19:18:31.976605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.739 [2024-07-25 19:18:31.976632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.739 [2024-07-25 19:18:31.976648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.739 [2024-07-25 19:18:31.976661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.739 [2024-07-25 19:18:31.976690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.739 qpair failed and we were unable to recover it.
00:27:39.739 [2024-07-25 19:18:31.986379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.739 [2024-07-25 19:18:31.986528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.739 [2024-07-25 19:18:31.986556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.739 [2024-07-25 19:18:31.986572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.739 [2024-07-25 19:18:31.986586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.739 [2024-07-25 19:18:31.986615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.739 qpair failed and we were unable to recover it.
00:27:39.739 [2024-07-25 19:18:31.996422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.739 [2024-07-25 19:18:31.996570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.739 [2024-07-25 19:18:31.996598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.739 [2024-07-25 19:18:31.996614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.739 [2024-07-25 19:18:31.996628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.739 [2024-07-25 19:18:31.996659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.739 qpair failed and we were unable to recover it.
00:27:39.739 [2024-07-25 19:18:32.006421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.739 [2024-07-25 19:18:32.006568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.739 [2024-07-25 19:18:32.006594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.739 [2024-07-25 19:18:32.006609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.739 [2024-07-25 19:18:32.006623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.739 [2024-07-25 19:18:32.006652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.739 qpair failed and we were unable to recover it.
00:27:39.739 [2024-07-25 19:18:32.016546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.739 [2024-07-25 19:18:32.016698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.739 [2024-07-25 19:18:32.016724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.739 [2024-07-25 19:18:32.016740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.739 [2024-07-25 19:18:32.016754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.739 [2024-07-25 19:18:32.016782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.739 qpair failed and we were unable to recover it.
00:27:39.739 [2024-07-25 19:18:32.026474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.739 [2024-07-25 19:18:32.026628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.739 [2024-07-25 19:18:32.026655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.739 [2024-07-25 19:18:32.026671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.739 [2024-07-25 19:18:32.026684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.739 [2024-07-25 19:18:32.026729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.739 qpair failed and we were unable to recover it.
00:27:39.739 [2024-07-25 19:18:32.036490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.739 [2024-07-25 19:18:32.036639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.739 [2024-07-25 19:18:32.036667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.739 [2024-07-25 19:18:32.036683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.739 [2024-07-25 19:18:32.036696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.739 [2024-07-25 19:18:32.036740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.739 qpair failed and we were unable to recover it.
00:27:39.739 [2024-07-25 19:18:32.046553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.739 [2024-07-25 19:18:32.046716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.739 [2024-07-25 19:18:32.046743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.739 [2024-07-25 19:18:32.046759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.739 [2024-07-25 19:18:32.046788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.739 [2024-07-25 19:18:32.046817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.739 qpair failed and we were unable to recover it.
00:27:39.739 [2024-07-25 19:18:32.056546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.739 [2024-07-25 19:18:32.056728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.739 [2024-07-25 19:18:32.056760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.739 [2024-07-25 19:18:32.056777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.739 [2024-07-25 19:18:32.056791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.739 [2024-07-25 19:18:32.056820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.739 qpair failed and we were unable to recover it.
00:27:39.739 [2024-07-25 19:18:32.066572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.739 [2024-07-25 19:18:32.066723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.739 [2024-07-25 19:18:32.066750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.739 [2024-07-25 19:18:32.066766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.739 [2024-07-25 19:18:32.066780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.739 [2024-07-25 19:18:32.066809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.739 qpair failed and we were unable to recover it.
00:27:39.739 [2024-07-25 19:18:32.076614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.739 [2024-07-25 19:18:32.076764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.739 [2024-07-25 19:18:32.076791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.739 [2024-07-25 19:18:32.076807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.739 [2024-07-25 19:18:32.076821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.739 [2024-07-25 19:18:32.076850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.739 qpair failed and we were unable to recover it.
00:27:39.739 [2024-07-25 19:18:32.086662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.739 [2024-07-25 19:18:32.086805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.739 [2024-07-25 19:18:32.086833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.739 [2024-07-25 19:18:32.086849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.739 [2024-07-25 19:18:32.086863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.739 [2024-07-25 19:18:32.086907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.739 qpair failed and we were unable to recover it.
00:27:39.739 [2024-07-25 19:18:32.096701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.739 [2024-07-25 19:18:32.096847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.739 [2024-07-25 19:18:32.096875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.739 [2024-07-25 19:18:32.096891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.740 [2024-07-25 19:18:32.096905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.740 [2024-07-25 19:18:32.096939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.740 qpair failed and we were unable to recover it.
00:27:39.740 [2024-07-25 19:18:32.106725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.740 [2024-07-25 19:18:32.106907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.740 [2024-07-25 19:18:32.106933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.740 [2024-07-25 19:18:32.106949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.740 [2024-07-25 19:18:32.106963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.740 [2024-07-25 19:18:32.106994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.740 qpair failed and we were unable to recover it.
00:27:39.740 [2024-07-25 19:18:32.116808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.740 [2024-07-25 19:18:32.116985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.740 [2024-07-25 19:18:32.117013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.740 [2024-07-25 19:18:32.117045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.740 [2024-07-25 19:18:32.117059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.740 [2024-07-25 19:18:32.117086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.740 qpair failed and we were unable to recover it.
00:27:39.740 [2024-07-25 19:18:32.126803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.740 [2024-07-25 19:18:32.126950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.740 [2024-07-25 19:18:32.126977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.740 [2024-07-25 19:18:32.126993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.740 [2024-07-25 19:18:32.127006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.740 [2024-07-25 19:18:32.127050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.740 qpair failed and we were unable to recover it.
00:27:39.740 [2024-07-25 19:18:32.136782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.740 [2024-07-25 19:18:32.136928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.740 [2024-07-25 19:18:32.136955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.740 [2024-07-25 19:18:32.136970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.740 [2024-07-25 19:18:32.136983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.740 [2024-07-25 19:18:32.137012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.740 qpair failed and we were unable to recover it.
00:27:39.740 [2024-07-25 19:18:32.146854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.740 [2024-07-25 19:18:32.147000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.740 [2024-07-25 19:18:32.147033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.740 [2024-07-25 19:18:32.147050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.740 [2024-07-25 19:18:32.147063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.740 [2024-07-25 19:18:32.147114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.740 qpair failed and we were unable to recover it.
00:27:39.740 [2024-07-25 19:18:32.156831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.740 [2024-07-25 19:18:32.157007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.740 [2024-07-25 19:18:32.157050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.740 [2024-07-25 19:18:32.157065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.740 [2024-07-25 19:18:32.157079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.740 [2024-07-25 19:18:32.157129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.740 qpair failed and we were unable to recover it.
00:27:39.740 [2024-07-25 19:18:32.166872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.740 [2024-07-25 19:18:32.167023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.740 [2024-07-25 19:18:32.167051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.740 [2024-07-25 19:18:32.167071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.740 [2024-07-25 19:18:32.167086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.740 [2024-07-25 19:18:32.167124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.740 qpair failed and we were unable to recover it.
00:27:39.740 [2024-07-25 19:18:32.176899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.740 [2024-07-25 19:18:32.177098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.740 [2024-07-25 19:18:32.177135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.740 [2024-07-25 19:18:32.177151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.740 [2024-07-25 19:18:32.177164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.740 [2024-07-25 19:18:32.177193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.740 qpair failed and we were unable to recover it.
00:27:39.740 [2024-07-25 19:18:32.186928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.740 [2024-07-25 19:18:32.187077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.740 [2024-07-25 19:18:32.187110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.740 [2024-07-25 19:18:32.187128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.740 [2024-07-25 19:18:32.187143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.740 [2024-07-25 19:18:32.187177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.740 qpair failed and we were unable to recover it.
00:27:39.740 [2024-07-25 19:18:32.196930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.740 [2024-07-25 19:18:32.197078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.740 [2024-07-25 19:18:32.197112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.740 [2024-07-25 19:18:32.197130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.740 [2024-07-25 19:18:32.197144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.740 [2024-07-25 19:18:32.197172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.740 qpair failed and we were unable to recover it.
00:27:39.740 [2024-07-25 19:18:32.207064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.740 [2024-07-25 19:18:32.207213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.740 [2024-07-25 19:18:32.207242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.740 [2024-07-25 19:18:32.207259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.740 [2024-07-25 19:18:32.207273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.740 [2024-07-25 19:18:32.207303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.740 qpair failed and we were unable to recover it.
00:27:39.999 [2024-07-25 19:18:32.217003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.999 [2024-07-25 19:18:32.217152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.999 [2024-07-25 19:18:32.217180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.999 [2024-07-25 19:18:32.217196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.999 [2024-07-25 19:18:32.217209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.999 [2024-07-25 19:18:32.217240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.999 qpair failed and we were unable to recover it.
00:27:39.999 [2024-07-25 19:18:32.227011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.999 [2024-07-25 19:18:32.227167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.999 [2024-07-25 19:18:32.227196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.999 [2024-07-25 19:18:32.227213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.999 [2024-07-25 19:18:32.227226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.999 [2024-07-25 19:18:32.227255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.999 qpair failed and we were unable to recover it.
00:27:39.999 [2024-07-25 19:18:32.237046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.999 [2024-07-25 19:18:32.237193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.999 [2024-07-25 19:18:32.237227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.999 [2024-07-25 19:18:32.237244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.999 [2024-07-25 19:18:32.237258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.999 [2024-07-25 19:18:32.237287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.999 qpair failed and we were unable to recover it.
00:27:39.999 [2024-07-25 19:18:32.247073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.999 [2024-07-25 19:18:32.247231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.999 [2024-07-25 19:18:32.247260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.999 [2024-07-25 19:18:32.247276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.999 [2024-07-25 19:18:32.247290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.999 [2024-07-25 19:18:32.247319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.999 qpair failed and we were unable to recover it.
00:27:39.999 [2024-07-25 19:18:32.257114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.999 [2024-07-25 19:18:32.257264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.999 [2024-07-25 19:18:32.257291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.999 [2024-07-25 19:18:32.257307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.999 [2024-07-25 19:18:32.257321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.999 [2024-07-25 19:18:32.257350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.999 qpair failed and we were unable to recover it.
00:27:39.999 [2024-07-25 19:18:32.267156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.999 [2024-07-25 19:18:32.267314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.999 [2024-07-25 19:18:32.267342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.999 [2024-07-25 19:18:32.267358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.999 [2024-07-25 19:18:32.267372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.999 [2024-07-25 19:18:32.267403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.999 qpair failed and we were unable to recover it.
00:27:39.999 [2024-07-25 19:18:32.277189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.999 [2024-07-25 19:18:32.277336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.999 [2024-07-25 19:18:32.277362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.999 [2024-07-25 19:18:32.277377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.999 [2024-07-25 19:18:32.277399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.999 [2024-07-25 19:18:32.277445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.999 qpair failed and we were unable to recover it.
00:27:39.999 [2024-07-25 19:18:32.287202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.999 [2024-07-25 19:18:32.287349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.999 [2024-07-25 19:18:32.287376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.999 [2024-07-25 19:18:32.287392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.999 [2024-07-25 19:18:32.287406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.999 [2024-07-25 19:18:32.287434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.999 qpair failed and we were unable to recover it.
00:27:39.999 [2024-07-25 19:18:32.297257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.999 [2024-07-25 19:18:32.297410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.999 [2024-07-25 19:18:32.297437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.999 [2024-07-25 19:18:32.297453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.999 [2024-07-25 19:18:32.297467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.999 [2024-07-25 19:18:32.297496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.999 qpair failed and we were unable to recover it.
00:27:39.999 [2024-07-25 19:18:32.307339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.999 [2024-07-25 19:18:32.307494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.999 [2024-07-25 19:18:32.307521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.999 [2024-07-25 19:18:32.307537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.999 [2024-07-25 19:18:32.307550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250
00:27:39.999 [2024-07-25 19:18:32.307594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.999 qpair failed and we were unable to recover it.
00:27:39.999 [2024-07-25 19:18:32.317347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.000 [2024-07-25 19:18:32.317548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.000 [2024-07-25 19:18:32.317590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.000 [2024-07-25 19:18:32.317606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.000 [2024-07-25 19:18:32.317619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.000 [2024-07-25 19:18:32.317662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.000 qpair failed and we were unable to recover it. 
00:27:40.000 [2024-07-25 19:18:32.327325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.000 [2024-07-25 19:18:32.327476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.000 [2024-07-25 19:18:32.327504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.000 [2024-07-25 19:18:32.327520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.000 [2024-07-25 19:18:32.327534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.000 [2024-07-25 19:18:32.327562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.000 qpair failed and we were unable to recover it. 
00:27:40.000 [2024-07-25 19:18:32.337367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.000 [2024-07-25 19:18:32.337518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.000 [2024-07-25 19:18:32.337546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.000 [2024-07-25 19:18:32.337562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.000 [2024-07-25 19:18:32.337576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.000 [2024-07-25 19:18:32.337604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.000 qpair failed and we were unable to recover it. 
00:27:40.000 [2024-07-25 19:18:32.347432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.000 [2024-07-25 19:18:32.347636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.000 [2024-07-25 19:18:32.347663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.000 [2024-07-25 19:18:32.347681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.000 [2024-07-25 19:18:32.347695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.000 [2024-07-25 19:18:32.347739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.000 qpair failed and we were unable to recover it. 
00:27:40.000 [2024-07-25 19:18:32.357423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.000 [2024-07-25 19:18:32.357575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.000 [2024-07-25 19:18:32.357603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.000 [2024-07-25 19:18:32.357619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.000 [2024-07-25 19:18:32.357633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.000 [2024-07-25 19:18:32.357677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.000 qpair failed and we were unable to recover it. 
00:27:40.000 [2024-07-25 19:18:32.367522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.000 [2024-07-25 19:18:32.367667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.000 [2024-07-25 19:18:32.367695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.000 [2024-07-25 19:18:32.367711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.000 [2024-07-25 19:18:32.367729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.000 [2024-07-25 19:18:32.367758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.000 qpair failed and we were unable to recover it. 
00:27:40.000 [2024-07-25 19:18:32.377472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.000 [2024-07-25 19:18:32.377644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.000 [2024-07-25 19:18:32.377672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.000 [2024-07-25 19:18:32.377688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.000 [2024-07-25 19:18:32.377701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.000 [2024-07-25 19:18:32.377730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.000 qpair failed and we were unable to recover it. 
00:27:40.000 [2024-07-25 19:18:32.387508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.000 [2024-07-25 19:18:32.387670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.000 [2024-07-25 19:18:32.387711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.000 [2024-07-25 19:18:32.387727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.000 [2024-07-25 19:18:32.387740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.000 [2024-07-25 19:18:32.387768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.000 qpair failed and we were unable to recover it. 
00:27:40.000 [2024-07-25 19:18:32.397556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.000 [2024-07-25 19:18:32.397708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.000 [2024-07-25 19:18:32.397736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.000 [2024-07-25 19:18:32.397751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.000 [2024-07-25 19:18:32.397764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.000 [2024-07-25 19:18:32.397809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.000 qpair failed and we were unable to recover it. 
00:27:40.000 [2024-07-25 19:18:32.407527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.000 [2024-07-25 19:18:32.407682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.000 [2024-07-25 19:18:32.407709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.000 [2024-07-25 19:18:32.407725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.000 [2024-07-25 19:18:32.407738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.000 [2024-07-25 19:18:32.407767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.000 qpair failed and we were unable to recover it. 
00:27:40.000 [2024-07-25 19:18:32.417569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.000 [2024-07-25 19:18:32.417724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.000 [2024-07-25 19:18:32.417751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.000 [2024-07-25 19:18:32.417766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.000 [2024-07-25 19:18:32.417780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.000 [2024-07-25 19:18:32.417809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.000 qpair failed and we were unable to recover it. 
00:27:40.000 [2024-07-25 19:18:32.427675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.000 [2024-07-25 19:18:32.427821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.000 [2024-07-25 19:18:32.427848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.000 [2024-07-25 19:18:32.427863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.000 [2024-07-25 19:18:32.427877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.000 [2024-07-25 19:18:32.427905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.000 qpair failed and we were unable to recover it. 
00:27:40.000 [2024-07-25 19:18:32.437657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.000 [2024-07-25 19:18:32.437804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.000 [2024-07-25 19:18:32.437835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.000 [2024-07-25 19:18:32.437851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.000 [2024-07-25 19:18:32.437880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.000 [2024-07-25 19:18:32.437911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.001 qpair failed and we were unable to recover it. 
00:27:40.001 [2024-07-25 19:18:32.447645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.001 [2024-07-25 19:18:32.447786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.001 [2024-07-25 19:18:32.447813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.001 [2024-07-25 19:18:32.447828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.001 [2024-07-25 19:18:32.447842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.001 [2024-07-25 19:18:32.447872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.001 qpair failed and we were unable to recover it. 
00:27:40.001 [2024-07-25 19:18:32.457735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.001 [2024-07-25 19:18:32.457888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.001 [2024-07-25 19:18:32.457915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.001 [2024-07-25 19:18:32.457931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.001 [2024-07-25 19:18:32.457950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.001 [2024-07-25 19:18:32.457980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.001 qpair failed and we were unable to recover it. 
00:27:40.001 [2024-07-25 19:18:32.467805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.001 [2024-07-25 19:18:32.467954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.001 [2024-07-25 19:18:32.467985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.001 [2024-07-25 19:18:32.468004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.001 [2024-07-25 19:18:32.468034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.001 [2024-07-25 19:18:32.468064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.001 qpair failed and we were unable to recover it. 
00:27:40.259 [2024-07-25 19:18:32.477734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.259 [2024-07-25 19:18:32.477885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.259 [2024-07-25 19:18:32.477915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.259 [2024-07-25 19:18:32.477942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.259 [2024-07-25 19:18:32.477955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.259 [2024-07-25 19:18:32.478008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.259 qpair failed and we were unable to recover it. 
00:27:40.259 [2024-07-25 19:18:32.487811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.259 [2024-07-25 19:18:32.487958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.259 [2024-07-25 19:18:32.487985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.259 [2024-07-25 19:18:32.488001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.259 [2024-07-25 19:18:32.488014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.259 [2024-07-25 19:18:32.488057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.259 qpair failed and we were unable to recover it. 
00:27:40.259 [2024-07-25 19:18:32.497874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.259 [2024-07-25 19:18:32.498024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.259 [2024-07-25 19:18:32.498051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.259 [2024-07-25 19:18:32.498067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.259 [2024-07-25 19:18:32.498081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:40.259 [2024-07-25 19:18:32.498119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.259 qpair failed and we were unable to recover it. 
00:27:40.259 [2024-07-25 19:18:32.507865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.259 [2024-07-25 19:18:32.508044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.260 [2024-07-25 19:18:32.508078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.260 [2024-07-25 19:18:32.508096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.260 [2024-07-25 19:18:32.508120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.260 [2024-07-25 19:18:32.508158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.260 qpair failed and we were unable to recover it. 
00:27:40.260 [2024-07-25 19:18:32.517935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.260 [2024-07-25 19:18:32.518079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.260 [2024-07-25 19:18:32.518121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.260 [2024-07-25 19:18:32.518143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.260 [2024-07-25 19:18:32.518159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.260 [2024-07-25 19:18:32.518205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.260 qpair failed and we were unable to recover it. 
00:27:40.260 [2024-07-25 19:18:32.527906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.260 [2024-07-25 19:18:32.528052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.260 [2024-07-25 19:18:32.528081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.260 [2024-07-25 19:18:32.528098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.260 [2024-07-25 19:18:32.528120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.260 [2024-07-25 19:18:32.528153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.260 qpair failed and we were unable to recover it. 
00:27:40.260 [2024-07-25 19:18:32.537932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.260 [2024-07-25 19:18:32.538079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.260 [2024-07-25 19:18:32.538118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.260 [2024-07-25 19:18:32.538142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.260 [2024-07-25 19:18:32.538158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.260 [2024-07-25 19:18:32.538192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.260 qpair failed and we were unable to recover it. 
00:27:40.260 [2024-07-25 19:18:32.548019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.260 [2024-07-25 19:18:32.548179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.260 [2024-07-25 19:18:32.548208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.260 [2024-07-25 19:18:32.548230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.260 [2024-07-25 19:18:32.548246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.260 [2024-07-25 19:18:32.548291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.260 qpair failed and we were unable to recover it. 
00:27:40.260 [2024-07-25 19:18:32.558013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.260 [2024-07-25 19:18:32.558168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.260 [2024-07-25 19:18:32.558196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.260 [2024-07-25 19:18:32.558212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.260 [2024-07-25 19:18:32.558228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.260 [2024-07-25 19:18:32.558260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.260 qpair failed and we were unable to recover it. 
00:27:40.260 [2024-07-25 19:18:32.568023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.260 [2024-07-25 19:18:32.568187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.260 [2024-07-25 19:18:32.568216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.260 [2024-07-25 19:18:32.568232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.260 [2024-07-25 19:18:32.568247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.260 [2024-07-25 19:18:32.568279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.260 qpair failed and we were unable to recover it. 
00:27:40.260 [2024-07-25 19:18:32.578121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.260 [2024-07-25 19:18:32.578304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.260 [2024-07-25 19:18:32.578332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.260 [2024-07-25 19:18:32.578349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.260 [2024-07-25 19:18:32.578364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.260 [2024-07-25 19:18:32.578410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.260 qpair failed and we were unable to recover it. 
00:27:40.260 [2024-07-25 19:18:32.588152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.260 [2024-07-25 19:18:32.588318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.260 [2024-07-25 19:18:32.588346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.260 [2024-07-25 19:18:32.588362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.260 [2024-07-25 19:18:32.588377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.260 [2024-07-25 19:18:32.588410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.260 qpair failed and we were unable to recover it. 
00:27:40.260 [2024-07-25 19:18:32.598146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.260 [2024-07-25 19:18:32.598291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.260 [2024-07-25 19:18:32.598320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.260 [2024-07-25 19:18:32.598337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.260 [2024-07-25 19:18:32.598351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.260 [2024-07-25 19:18:32.598383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.260 qpair failed and we were unable to recover it. 
00:27:40.260 [2024-07-25 19:18:32.608150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.260 [2024-07-25 19:18:32.608298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.260 [2024-07-25 19:18:32.608326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.260 [2024-07-25 19:18:32.608342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.260 [2024-07-25 19:18:32.608355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.260 [2024-07-25 19:18:32.608387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.260 qpair failed and we were unable to recover it. 
00:27:40.260 [2024-07-25 19:18:32.618209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.260 [2024-07-25 19:18:32.618380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.260 [2024-07-25 19:18:32.618407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.260 [2024-07-25 19:18:32.618438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.260 [2024-07-25 19:18:32.618453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.261 [2024-07-25 19:18:32.618499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.261 qpair failed and we were unable to recover it. 
00:27:40.261 [2024-07-25 19:18:32.628209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.261 [2024-07-25 19:18:32.628390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.261 [2024-07-25 19:18:32.628418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.261 [2024-07-25 19:18:32.628435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.261 [2024-07-25 19:18:32.628448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.261 [2024-07-25 19:18:32.628480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.261 qpair failed and we were unable to recover it. 
00:27:40.261 [2024-07-25 19:18:32.638232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.261 [2024-07-25 19:18:32.638406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.261 [2024-07-25 19:18:32.638434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.261 [2024-07-25 19:18:32.638457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.261 [2024-07-25 19:18:32.638488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.261 [2024-07-25 19:18:32.638519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.261 qpair failed and we were unable to recover it. 
00:27:40.261 [2024-07-25 19:18:32.648230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.261 [2024-07-25 19:18:32.648375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.261 [2024-07-25 19:18:32.648404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.261 [2024-07-25 19:18:32.648420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.261 [2024-07-25 19:18:32.648435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.261 [2024-07-25 19:18:32.648482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.261 qpair failed and we were unable to recover it. 
00:27:40.261 [2024-07-25 19:18:32.658267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.261 [2024-07-25 19:18:32.658413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.261 [2024-07-25 19:18:32.658441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.261 [2024-07-25 19:18:32.658457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.261 [2024-07-25 19:18:32.658471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.261 [2024-07-25 19:18:32.658518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.261 qpair failed and we were unable to recover it. 
00:27:40.261 [2024-07-25 19:18:32.668325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.261 [2024-07-25 19:18:32.668509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.261 [2024-07-25 19:18:32.668538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.261 [2024-07-25 19:18:32.668553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.261 [2024-07-25 19:18:32.668569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.261 [2024-07-25 19:18:32.668600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.261 qpair failed and we were unable to recover it. 
00:27:40.261 [2024-07-25 19:18:32.678310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.261 [2024-07-25 19:18:32.678453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.261 [2024-07-25 19:18:32.678481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.261 [2024-07-25 19:18:32.678497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.261 [2024-07-25 19:18:32.678512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.261 [2024-07-25 19:18:32.678544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.261 qpair failed and we were unable to recover it. 
00:27:40.261 [2024-07-25 19:18:32.688459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.261 [2024-07-25 19:18:32.688622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.261 [2024-07-25 19:18:32.688651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.261 [2024-07-25 19:18:32.688667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.261 [2024-07-25 19:18:32.688697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.261 [2024-07-25 19:18:32.688727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.261 qpair failed and we were unable to recover it. 
00:27:40.261 [2024-07-25 19:18:32.698418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.261 [2024-07-25 19:18:32.698573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.261 [2024-07-25 19:18:32.698600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.261 [2024-07-25 19:18:32.698616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.261 [2024-07-25 19:18:32.698646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.261 [2024-07-25 19:18:32.698677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.261 qpair failed and we were unable to recover it. 
00:27:40.261 [2024-07-25 19:18:32.708454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.261 [2024-07-25 19:18:32.708622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.261 [2024-07-25 19:18:32.708650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.261 [2024-07-25 19:18:32.708666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.261 [2024-07-25 19:18:32.708695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.261 [2024-07-25 19:18:32.708726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.261 qpair failed and we were unable to recover it. 
00:27:40.261 [2024-07-25 19:18:32.718525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.261 [2024-07-25 19:18:32.718683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.261 [2024-07-25 19:18:32.718711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.261 [2024-07-25 19:18:32.718728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.261 [2024-07-25 19:18:32.718743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.261 [2024-07-25 19:18:32.718788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.261 qpair failed and we were unable to recover it. 
00:27:40.261 [2024-07-25 19:18:32.728481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.261 [2024-07-25 19:18:32.728624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.261 [2024-07-25 19:18:32.728658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.261 [2024-07-25 19:18:32.728674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.261 [2024-07-25 19:18:32.728690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.261 [2024-07-25 19:18:32.728721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.261 qpair failed and we were unable to recover it. 
00:27:40.520 [2024-07-25 19:18:32.738495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.520 [2024-07-25 19:18:32.738644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.520 [2024-07-25 19:18:32.738672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.520 [2024-07-25 19:18:32.738687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.520 [2024-07-25 19:18:32.738702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.520 [2024-07-25 19:18:32.738733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.520 qpair failed and we were unable to recover it. 
00:27:40.520 [2024-07-25 19:18:32.748562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.520 [2024-07-25 19:18:32.748745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.520 [2024-07-25 19:18:32.748773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.520 [2024-07-25 19:18:32.748803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.520 [2024-07-25 19:18:32.748819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.520 [2024-07-25 19:18:32.748851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.520 qpair failed and we were unable to recover it. 
00:27:40.520 [2024-07-25 19:18:32.758625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.520 [2024-07-25 19:18:32.758779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.520 [2024-07-25 19:18:32.758807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.520 [2024-07-25 19:18:32.758823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.520 [2024-07-25 19:18:32.758836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.520 [2024-07-25 19:18:32.758868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.520 qpair failed and we were unable to recover it. 
00:27:40.520 [2024-07-25 19:18:32.768633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.520 [2024-07-25 19:18:32.768822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.520 [2024-07-25 19:18:32.768865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.520 [2024-07-25 19:18:32.768881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.520 [2024-07-25 19:18:32.768895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.520 [2024-07-25 19:18:32.768948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.520 qpair failed and we were unable to recover it. 
00:27:40.520 [2024-07-25 19:18:32.778691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.520 [2024-07-25 19:18:32.778849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.520 [2024-07-25 19:18:32.778878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.520 [2024-07-25 19:18:32.778897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.520 [2024-07-25 19:18:32.778926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.520 [2024-07-25 19:18:32.778958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.520 qpair failed and we were unable to recover it. 
00:27:40.520 [2024-07-25 19:18:32.788677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.520 [2024-07-25 19:18:32.788830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.520 [2024-07-25 19:18:32.788858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.520 [2024-07-25 19:18:32.788874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.520 [2024-07-25 19:18:32.788889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.520 [2024-07-25 19:18:32.788920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.520 qpair failed and we were unable to recover it. 
00:27:40.520 [2024-07-25 19:18:32.798684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.520 [2024-07-25 19:18:32.798833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.520 [2024-07-25 19:18:32.798860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.520 [2024-07-25 19:18:32.798876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.520 [2024-07-25 19:18:32.798890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.520 [2024-07-25 19:18:32.798937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.520 qpair failed and we were unable to recover it. 
00:27:40.520 [2024-07-25 19:18:32.808742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.520 [2024-07-25 19:18:32.808889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.520 [2024-07-25 19:18:32.808916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.520 [2024-07-25 19:18:32.808932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.520 [2024-07-25 19:18:32.808947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.520 [2024-07-25 19:18:32.808993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.520 qpair failed and we were unable to recover it. 
00:27:40.520 [2024-07-25 19:18:32.818722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.520 [2024-07-25 19:18:32.818872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.520 [2024-07-25 19:18:32.818905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.520 [2024-07-25 19:18:32.818921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.520 [2024-07-25 19:18:32.818935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.520 [2024-07-25 19:18:32.818966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.520 qpair failed and we were unable to recover it. 
00:27:40.520 [2024-07-25 19:18:32.828765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.520 [2024-07-25 19:18:32.828925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.520 [2024-07-25 19:18:32.828953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.520 [2024-07-25 19:18:32.828969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.520 [2024-07-25 19:18:32.828982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.520 [2024-07-25 19:18:32.829013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.520 qpair failed and we were unable to recover it. 
00:27:40.520 [2024-07-25 19:18:32.838763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.520 [2024-07-25 19:18:32.838905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.520 [2024-07-25 19:18:32.838933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.520 [2024-07-25 19:18:32.838948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.521 [2024-07-25 19:18:32.838961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.521 [2024-07-25 19:18:32.838993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.521 qpair failed and we were unable to recover it. 
00:27:40.521 [2024-07-25 19:18:32.848784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.521 [2024-07-25 19:18:32.848926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.521 [2024-07-25 19:18:32.848953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.521 [2024-07-25 19:18:32.848969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.521 [2024-07-25 19:18:32.848983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.521 [2024-07-25 19:18:32.849014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.521 qpair failed and we were unable to recover it. 
00:27:40.521 [2024-07-25 19:18:32.858861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.521 [2024-07-25 19:18:32.859060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.521 [2024-07-25 19:18:32.859087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.521 [2024-07-25 19:18:32.859111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.521 [2024-07-25 19:18:32.859132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.521 [2024-07-25 19:18:32.859165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.521 qpair failed and we were unable to recover it. 
00:27:40.521 [2024-07-25 19:18:32.868868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.521 [2024-07-25 19:18:32.869018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.521 [2024-07-25 19:18:32.869045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.521 [2024-07-25 19:18:32.869061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.521 [2024-07-25 19:18:32.869075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.521 [2024-07-25 19:18:32.869114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.521 qpair failed and we were unable to recover it. 
00:27:40.521 [2024-07-25 19:18:32.878958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.521 [2024-07-25 19:18:32.879163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.521 [2024-07-25 19:18:32.879191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.521 [2024-07-25 19:18:32.879206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.521 [2024-07-25 19:18:32.879221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.521 [2024-07-25 19:18:32.879252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.521 qpair failed and we were unable to recover it. 
00:27:40.521 [2024-07-25 19:18:32.888902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.521 [2024-07-25 19:18:32.889039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.521 [2024-07-25 19:18:32.889066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.521 [2024-07-25 19:18:32.889082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.521 [2024-07-25 19:18:32.889097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.521 [2024-07-25 19:18:32.889137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.521 qpair failed and we were unable to recover it. 
00:27:40.521 [2024-07-25 19:18:32.898999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.521 [2024-07-25 19:18:32.899177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.521 [2024-07-25 19:18:32.899204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.521 [2024-07-25 19:18:32.899220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.521 [2024-07-25 19:18:32.899234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.521 [2024-07-25 19:18:32.899265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.521 qpair failed and we were unable to recover it. 
00:27:40.521 [2024-07-25 19:18:32.908990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.521 [2024-07-25 19:18:32.909151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.521 [2024-07-25 19:18:32.909178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.521 [2024-07-25 19:18:32.909194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.521 [2024-07-25 19:18:32.909207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.521 [2024-07-25 19:18:32.909238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.521 qpair failed and we were unable to recover it. 
00:27:40.521 [2024-07-25 19:18:32.919021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.521 [2024-07-25 19:18:32.919177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.521 [2024-07-25 19:18:32.919206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.521 [2024-07-25 19:18:32.919222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.521 [2024-07-25 19:18:32.919235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.521 [2024-07-25 19:18:32.919281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.521 qpair failed and we were unable to recover it. 
00:27:40.521 [2024-07-25 19:18:32.929129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.521 [2024-07-25 19:18:32.929280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.521 [2024-07-25 19:18:32.929309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.521 [2024-07-25 19:18:32.929329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.521 [2024-07-25 19:18:32.929345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.521 [2024-07-25 19:18:32.929377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.521 qpair failed and we were unable to recover it. 
00:27:40.521 [2024-07-25 19:18:32.939058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.521 [2024-07-25 19:18:32.939229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.521 [2024-07-25 19:18:32.939256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.521 [2024-07-25 19:18:32.939273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.521 [2024-07-25 19:18:32.939288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.521 [2024-07-25 19:18:32.939319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.521 qpair failed and we were unable to recover it. 
00:27:40.521 [2024-07-25 19:18:32.949138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.521 [2024-07-25 19:18:32.949325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.521 [2024-07-25 19:18:32.949352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.521 [2024-07-25 19:18:32.949375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.521 [2024-07-25 19:18:32.949404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.521 [2024-07-25 19:18:32.949435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.521 qpair failed and we were unable to recover it. 
00:27:40.521 [2024-07-25 19:18:32.959155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.521 [2024-07-25 19:18:32.959351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.521 [2024-07-25 19:18:32.959378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.521 [2024-07-25 19:18:32.959394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.521 [2024-07-25 19:18:32.959424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.521 [2024-07-25 19:18:32.959454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.521 qpair failed and we were unable to recover it. 
00:27:40.521 [2024-07-25 19:18:32.969152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.521 [2024-07-25 19:18:32.969299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.521 [2024-07-25 19:18:32.969327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.521 [2024-07-25 19:18:32.969343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.522 [2024-07-25 19:18:32.969358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.522 [2024-07-25 19:18:32.969392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.522 qpair failed and we were unable to recover it. 
00:27:40.522 [2024-07-25 19:18:32.979211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.522 [2024-07-25 19:18:32.979395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.522 [2024-07-25 19:18:32.979438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.522 [2024-07-25 19:18:32.979454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.522 [2024-07-25 19:18:32.979468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.522 [2024-07-25 19:18:32.979516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.522 qpair failed and we were unable to recover it. 
00:27:40.522 [2024-07-25 19:18:32.989206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.522 [2024-07-25 19:18:32.989363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.522 [2024-07-25 19:18:32.989400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.522 [2024-07-25 19:18:32.989416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.522 [2024-07-25 19:18:32.989430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.522 [2024-07-25 19:18:32.989461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.522 qpair failed and we were unable to recover it. 
00:27:40.780 [2024-07-25 19:18:32.999232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.780 [2024-07-25 19:18:32.999393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.780 [2024-07-25 19:18:32.999421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.780 [2024-07-25 19:18:32.999437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.780 [2024-07-25 19:18:32.999451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.780 [2024-07-25 19:18:32.999486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.780 qpair failed and we were unable to recover it. 
00:27:40.780 [2024-07-25 19:18:33.009258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.780 [2024-07-25 19:18:33.009408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.780 [2024-07-25 19:18:33.009436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.780 [2024-07-25 19:18:33.009452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.780 [2024-07-25 19:18:33.009467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.780 [2024-07-25 19:18:33.009513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.780 qpair failed and we were unable to recover it. 
00:27:40.780 [2024-07-25 19:18:33.019302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.780 [2024-07-25 19:18:33.019459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.780 [2024-07-25 19:18:33.019487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.780 [2024-07-25 19:18:33.019503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.780 [2024-07-25 19:18:33.019518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.780 [2024-07-25 19:18:33.019549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.780 qpair failed and we were unable to recover it. 
00:27:40.780 [2024-07-25 19:18:33.029322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.780 [2024-07-25 19:18:33.029477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.780 [2024-07-25 19:18:33.029505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.780 [2024-07-25 19:18:33.029522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.780 [2024-07-25 19:18:33.029536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.780 [2024-07-25 19:18:33.029569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.780 qpair failed and we were unable to recover it. 
00:27:40.780 [2024-07-25 19:18:33.039333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.780 [2024-07-25 19:18:33.039482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.780 [2024-07-25 19:18:33.039508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.780 [2024-07-25 19:18:33.039531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.780 [2024-07-25 19:18:33.039546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.780 [2024-07-25 19:18:33.039578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.780 qpair failed and we were unable to recover it. 
00:27:40.780 [2024-07-25 19:18:33.049450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.780 [2024-07-25 19:18:33.049606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.780 [2024-07-25 19:18:33.049633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.780 [2024-07-25 19:18:33.049648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.780 [2024-07-25 19:18:33.049663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.780 [2024-07-25 19:18:33.049696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.780 qpair failed and we were unable to recover it. 
00:27:40.781 [2024-07-25 19:18:33.059400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.781 [2024-07-25 19:18:33.059557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.781 [2024-07-25 19:18:33.059585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.781 [2024-07-25 19:18:33.059601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.781 [2024-07-25 19:18:33.059629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.781 [2024-07-25 19:18:33.059661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.781 qpair failed and we were unable to recover it. 
00:27:40.781 [2024-07-25 19:18:33.069473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.781 [2024-07-25 19:18:33.069634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.781 [2024-07-25 19:18:33.069661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.781 [2024-07-25 19:18:33.069677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.781 [2024-07-25 19:18:33.069706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.781 [2024-07-25 19:18:33.069736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.781 qpair failed and we were unable to recover it. 
00:27:40.781 [2024-07-25 19:18:33.079461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.781 [2024-07-25 19:18:33.079618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.781 [2024-07-25 19:18:33.079645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.781 [2024-07-25 19:18:33.079661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.781 [2024-07-25 19:18:33.079675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.781 [2024-07-25 19:18:33.079706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.781 qpair failed and we were unable to recover it. 
00:27:40.781 [2024-07-25 19:18:33.089503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.781 [2024-07-25 19:18:33.089665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.781 [2024-07-25 19:18:33.089692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.781 [2024-07-25 19:18:33.089709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.781 [2024-07-25 19:18:33.089722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.781 [2024-07-25 19:18:33.089754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.781 qpair failed and we were unable to recover it. 
00:27:40.781 [2024-07-25 19:18:33.099532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.781 [2024-07-25 19:18:33.099691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.781 [2024-07-25 19:18:33.099718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.781 [2024-07-25 19:18:33.099734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.781 [2024-07-25 19:18:33.099748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.781 [2024-07-25 19:18:33.099795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.781 qpair failed and we were unable to recover it. 
00:27:40.781 [2024-07-25 19:18:33.109526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.781 [2024-07-25 19:18:33.109675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.781 [2024-07-25 19:18:33.109701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.781 [2024-07-25 19:18:33.109717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.781 [2024-07-25 19:18:33.109732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.781 [2024-07-25 19:18:33.109763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.781 qpair failed and we were unable to recover it. 
00:27:40.781 [2024-07-25 19:18:33.119554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.781 [2024-07-25 19:18:33.119698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.781 [2024-07-25 19:18:33.119725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.781 [2024-07-25 19:18:33.119740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.781 [2024-07-25 19:18:33.119755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.781 [2024-07-25 19:18:33.119786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.781 qpair failed and we were unable to recover it. 
00:27:40.781 [2024-07-25 19:18:33.129638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.781 [2024-07-25 19:18:33.129788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.781 [2024-07-25 19:18:33.129820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.781 [2024-07-25 19:18:33.129837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.781 [2024-07-25 19:18:33.129867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.781 [2024-07-25 19:18:33.129899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.781 qpair failed and we were unable to recover it. 
00:27:40.781 [2024-07-25 19:18:33.139648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.781 [2024-07-25 19:18:33.139794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.781 [2024-07-25 19:18:33.139820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.781 [2024-07-25 19:18:33.139836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.781 [2024-07-25 19:18:33.139851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.781 [2024-07-25 19:18:33.139881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.781 qpair failed and we were unable to recover it. 
00:27:40.781 [2024-07-25 19:18:33.149727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.781 [2024-07-25 19:18:33.149877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.781 [2024-07-25 19:18:33.149903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.781 [2024-07-25 19:18:33.149919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.781 [2024-07-25 19:18:33.149933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.781 [2024-07-25 19:18:33.149979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.781 qpair failed and we were unable to recover it. 
00:27:40.781 [2024-07-25 19:18:33.159696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.781 [2024-07-25 19:18:33.159884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.781 [2024-07-25 19:18:33.159926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.781 [2024-07-25 19:18:33.159942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.781 [2024-07-25 19:18:33.159956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.781 [2024-07-25 19:18:33.160016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.781 qpair failed and we were unable to recover it. 
00:27:40.781 [2024-07-25 19:18:33.169733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.781 [2024-07-25 19:18:33.169916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.781 [2024-07-25 19:18:33.169957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.781 [2024-07-25 19:18:33.169974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.781 [2024-07-25 19:18:33.169988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.781 [2024-07-25 19:18:33.170039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.781 qpair failed and we were unable to recover it. 
00:27:40.781 [2024-07-25 19:18:33.179786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.781 [2024-07-25 19:18:33.179961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.781 [2024-07-25 19:18:33.179988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.781 [2024-07-25 19:18:33.180020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.781 [2024-07-25 19:18:33.180035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.781 [2024-07-25 19:18:33.180066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.781 qpair failed and we were unable to recover it. 
00:27:40.781 [2024-07-25 19:18:33.189775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.782 [2024-07-25 19:18:33.189931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.782 [2024-07-25 19:18:33.189958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.782 [2024-07-25 19:18:33.189974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.782 [2024-07-25 19:18:33.189988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.782 [2024-07-25 19:18:33.190019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.782 qpair failed and we were unable to recover it. 
00:27:40.782 [2024-07-25 19:18:33.199795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.782 [2024-07-25 19:18:33.199947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.782 [2024-07-25 19:18:33.199974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.782 [2024-07-25 19:18:33.199990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.782 [2024-07-25 19:18:33.200004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.782 [2024-07-25 19:18:33.200051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.782 qpair failed and we were unable to recover it. 
00:27:40.782 [2024-07-25 19:18:33.209806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.782 [2024-07-25 19:18:33.209997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.782 [2024-07-25 19:18:33.210023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.782 [2024-07-25 19:18:33.210039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.782 [2024-07-25 19:18:33.210068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.782 [2024-07-25 19:18:33.210099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.782 qpair failed and we were unable to recover it. 
00:27:40.782 [2024-07-25 19:18:33.219883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.782 [2024-07-25 19:18:33.220088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.782 [2024-07-25 19:18:33.220145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.782 [2024-07-25 19:18:33.220164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.782 [2024-07-25 19:18:33.220193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.782 [2024-07-25 19:18:33.220228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.782 qpair failed and we were unable to recover it. 
00:27:40.782 [2024-07-25 19:18:33.229917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.782 [2024-07-25 19:18:33.230067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.782 [2024-07-25 19:18:33.230095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.782 [2024-07-25 19:18:33.230120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.782 [2024-07-25 19:18:33.230136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.782 [2024-07-25 19:18:33.230168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.782 qpair failed and we were unable to recover it. 
00:27:40.782 [2024-07-25 19:18:33.239891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.782 [2024-07-25 19:18:33.240039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.782 [2024-07-25 19:18:33.240065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.782 [2024-07-25 19:18:33.240081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.782 [2024-07-25 19:18:33.240096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.782 [2024-07-25 19:18:33.240134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.782 qpair failed and we were unable to recover it. 
00:27:40.782 [2024-07-25 19:18:33.249959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.782 [2024-07-25 19:18:33.250114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.782 [2024-07-25 19:18:33.250142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.782 [2024-07-25 19:18:33.250158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.782 [2024-07-25 19:18:33.250173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:40.782 [2024-07-25 19:18:33.250204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.782 qpair failed and we were unable to recover it. 
00:27:41.041 [2024-07-25 19:18:33.259970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.041 [2024-07-25 19:18:33.260157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.041 [2024-07-25 19:18:33.260184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.041 [2024-07-25 19:18:33.260203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.260224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.260257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.269988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.270148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.270175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.270191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.270205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.270250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.280001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.280194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.280222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.280238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.280252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.280283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.290111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.290302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.290329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.290345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.290359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.290390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.300112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.300277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.300304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.300320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.300334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.300365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.310153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.310312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.310338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.310355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.310369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.310401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.320201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.320361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.320388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.320404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.320434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.320464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.330234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.330392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.330419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.330435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.330450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.330496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.340207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.340402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.340443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.340459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.340473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.340517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.350254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.350408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.350435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.350451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.350471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.350503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.360325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.360480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.360507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.360523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.360537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.360582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.370310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.370470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.370497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.370513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.370545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.370575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.380368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.380548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.380590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.380606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.380619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.380650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.390345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.390501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.390527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.390543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.390557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.390588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.400427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.400585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.400614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.400647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.400662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.400693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.410400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.410555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.410582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.410598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.410613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.410645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.420425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.420578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.420605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.420621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.042 [2024-07-25 19:18:33.420635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.042 [2024-07-25 19:18:33.420666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.042 qpair failed and we were unable to recover it. 
00:27:41.042 [2024-07-25 19:18:33.430456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.042 [2024-07-25 19:18:33.430646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.042 [2024-07-25 19:18:33.430691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.042 [2024-07-25 19:18:33.430709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.043 [2024-07-25 19:18:33.430723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.043 [2024-07-25 19:18:33.430768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.043 qpair failed and we were unable to recover it. 
00:27:41.043 [2024-07-25 19:18:33.440473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.043 [2024-07-25 19:18:33.440612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.043 [2024-07-25 19:18:33.440640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.043 [2024-07-25 19:18:33.440665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.043 [2024-07-25 19:18:33.440680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.043 [2024-07-25 19:18:33.440712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.043 qpair failed and we were unable to recover it. 
00:27:41.043 [2024-07-25 19:18:33.450578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.043 [2024-07-25 19:18:33.450756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.043 [2024-07-25 19:18:33.450785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.043 [2024-07-25 19:18:33.450816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.043 [2024-07-25 19:18:33.450829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.043 [2024-07-25 19:18:33.450875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.043 qpair failed and we were unable to recover it. 
00:27:41.043 [2024-07-25 19:18:33.460551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.043 [2024-07-25 19:18:33.460711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.043 [2024-07-25 19:18:33.460739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.043 [2024-07-25 19:18:33.460755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.043 [2024-07-25 19:18:33.460769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.043 [2024-07-25 19:18:33.460800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.043 qpair failed and we were unable to recover it. 
00:27:41.043 [2024-07-25 19:18:33.470626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.043 [2024-07-25 19:18:33.470786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.043 [2024-07-25 19:18:33.470814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.043 [2024-07-25 19:18:33.470831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.043 [2024-07-25 19:18:33.470845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.043 [2024-07-25 19:18:33.470903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.043 qpair failed and we were unable to recover it. 
00:27:41.043 [2024-07-25 19:18:33.480595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.043 [2024-07-25 19:18:33.480740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.043 [2024-07-25 19:18:33.480769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.043 [2024-07-25 19:18:33.480785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.043 [2024-07-25 19:18:33.480799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.043 [2024-07-25 19:18:33.480829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.043 qpair failed and we were unable to recover it. 
00:27:41.043 [2024-07-25 19:18:33.490676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.043 [2024-07-25 19:18:33.490828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.043 [2024-07-25 19:18:33.490857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.043 [2024-07-25 19:18:33.490873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.043 [2024-07-25 19:18:33.490887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.043 [2024-07-25 19:18:33.490932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.043 qpair failed and we were unable to recover it. 
00:27:41.043 [2024-07-25 19:18:33.500726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.043 [2024-07-25 19:18:33.500880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.043 [2024-07-25 19:18:33.500919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.043 [2024-07-25 19:18:33.500935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.043 [2024-07-25 19:18:33.500948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.043 [2024-07-25 19:18:33.500994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.043 qpair failed and we were unable to recover it. 
00:27:41.043 [2024-07-25 19:18:33.510714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.043 [2024-07-25 19:18:33.510886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.043 [2024-07-25 19:18:33.510914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.043 [2024-07-25 19:18:33.510931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.043 [2024-07-25 19:18:33.510945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.043 [2024-07-25 19:18:33.510977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.043 qpair failed and we were unable to recover it. 
00:27:41.301 [2024-07-25 19:18:33.520739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.301 [2024-07-25 19:18:33.520886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.301 [2024-07-25 19:18:33.520914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.301 [2024-07-25 19:18:33.520930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.301 [2024-07-25 19:18:33.520943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.301 [2024-07-25 19:18:33.520974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.301 qpair failed and we were unable to recover it. 
00:27:41.301 [2024-07-25 19:18:33.530761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.301 [2024-07-25 19:18:33.530929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.301 [2024-07-25 19:18:33.530963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.301 [2024-07-25 19:18:33.530980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.301 [2024-07-25 19:18:33.531008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.301 [2024-07-25 19:18:33.531041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.301 qpair failed and we were unable to recover it. 
00:27:41.301 [2024-07-25 19:18:33.540781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.301 [2024-07-25 19:18:33.540929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.301 [2024-07-25 19:18:33.540957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.301 [2024-07-25 19:18:33.540973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.301 [2024-07-25 19:18:33.540987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.301 [2024-07-25 19:18:33.541018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.301 qpair failed and we were unable to recover it. 
00:27:41.301 [2024-07-25 19:18:33.550796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.301 [2024-07-25 19:18:33.550942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.301 [2024-07-25 19:18:33.550969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.301 [2024-07-25 19:18:33.550986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.301 [2024-07-25 19:18:33.551000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.301 [2024-07-25 19:18:33.551031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.301 qpair failed and we were unable to recover it. 
00:27:41.301 [2024-07-25 19:18:33.560821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.301 [2024-07-25 19:18:33.560959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.301 [2024-07-25 19:18:33.560985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.301 [2024-07-25 19:18:33.561001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.301 [2024-07-25 19:18:33.561014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.301 [2024-07-25 19:18:33.561044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.301 qpair failed and we were unable to recover it. 
00:27:41.301 [2024-07-25 19:18:33.570846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.301 [2024-07-25 19:18:33.570989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.301 [2024-07-25 19:18:33.571017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.301 [2024-07-25 19:18:33.571033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.301 [2024-07-25 19:18:33.571047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.301 [2024-07-25 19:18:33.571083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.301 qpair failed and we were unable to recover it. 
00:27:41.301 [2024-07-25 19:18:33.580881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.301 [2024-07-25 19:18:33.581034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.301 [2024-07-25 19:18:33.581062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.301 [2024-07-25 19:18:33.581078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.301 [2024-07-25 19:18:33.581092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.302 [2024-07-25 19:18:33.581130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.302 qpair failed and we were unable to recover it. 
00:27:41.302 [2024-07-25 19:18:33.590902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.302 [2024-07-25 19:18:33.591057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.302 [2024-07-25 19:18:33.591085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.302 [2024-07-25 19:18:33.591109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.302 [2024-07-25 19:18:33.591126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.302 [2024-07-25 19:18:33.591157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.302 qpair failed and we were unable to recover it. 
00:27:41.302 [2024-07-25 19:18:33.600936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.302 [2024-07-25 19:18:33.601131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.302 [2024-07-25 19:18:33.601160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.302 [2024-07-25 19:18:33.601177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.302 [2024-07-25 19:18:33.601190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.302 [2024-07-25 19:18:33.601221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.302 qpair failed and we were unable to recover it. 
00:27:41.302 [2024-07-25 19:18:33.610967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.302 [2024-07-25 19:18:33.611120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.302 [2024-07-25 19:18:33.611147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.302 [2024-07-25 19:18:33.611163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.302 [2024-07-25 19:18:33.611178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.302 [2024-07-25 19:18:33.611208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.302 qpair failed and we were unable to recover it. 
00:27:41.302 [2024-07-25 19:18:33.620996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.302 [2024-07-25 19:18:33.621194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.302 [2024-07-25 19:18:33.621226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.302 [2024-07-25 19:18:33.621243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.302 [2024-07-25 19:18:33.621258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.302 [2024-07-25 19:18:33.621289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.302 qpair failed and we were unable to recover it. 
00:27:41.302 [2024-07-25 19:18:33.631010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.302 [2024-07-25 19:18:33.631159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.302 [2024-07-25 19:18:33.631186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.302 [2024-07-25 19:18:33.631202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.302 [2024-07-25 19:18:33.631217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.302 [2024-07-25 19:18:33.631249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.302 qpair failed and we were unable to recover it. 
00:27:41.302 [2024-07-25 19:18:33.641081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.302 [2024-07-25 19:18:33.641237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.302 [2024-07-25 19:18:33.641266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.302 [2024-07-25 19:18:33.641282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.302 [2024-07-25 19:18:33.641296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.302 [2024-07-25 19:18:33.641327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.302 qpair failed and we were unable to recover it. 
00:27:41.302 [2024-07-25 19:18:33.651073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.302 [2024-07-25 19:18:33.651219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.302 [2024-07-25 19:18:33.651248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.302 [2024-07-25 19:18:33.651263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.302 [2024-07-25 19:18:33.651277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.302 [2024-07-25 19:18:33.651308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.302 qpair failed and we were unable to recover it. 
00:27:41.302 [2024-07-25 19:18:33.661113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.302 [2024-07-25 19:18:33.661309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.302 [2024-07-25 19:18:33.661337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.302 [2024-07-25 19:18:33.661353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.302 [2024-07-25 19:18:33.661372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.302 [2024-07-25 19:18:33.661405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.302 qpair failed and we were unable to recover it. 
00:27:41.302 [2024-07-25 19:18:33.671158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.302 [2024-07-25 19:18:33.671306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.302 [2024-07-25 19:18:33.671335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.302 [2024-07-25 19:18:33.671351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.302 [2024-07-25 19:18:33.671364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.302 [2024-07-25 19:18:33.671395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.302 qpair failed and we were unable to recover it. 
00:27:41.302 [2024-07-25 19:18:33.681178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.302 [2024-07-25 19:18:33.681326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.302 [2024-07-25 19:18:33.681355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.302 [2024-07-25 19:18:33.681372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.302 [2024-07-25 19:18:33.681386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.302 [2024-07-25 19:18:33.681430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.302 qpair failed and we were unable to recover it. 
00:27:41.302 [2024-07-25 19:18:33.691218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.302 [2024-07-25 19:18:33.691359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.302 [2024-07-25 19:18:33.691387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.302 [2024-07-25 19:18:33.691403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.302 [2024-07-25 19:18:33.691417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.302 [2024-07-25 19:18:33.691462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.302 qpair failed and we were unable to recover it. 
00:27:41.302 [2024-07-25 19:18:33.701240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.302 [2024-07-25 19:18:33.701387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.302 [2024-07-25 19:18:33.701414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.302 [2024-07-25 19:18:33.701430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.302 [2024-07-25 19:18:33.701443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.302 [2024-07-25 19:18:33.701490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.302 qpair failed and we were unable to recover it. 
00:27:41.303 [2024-07-25 19:18:33.711271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.303 [2024-07-25 19:18:33.711425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.303 [2024-07-25 19:18:33.711454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.303 [2024-07-25 19:18:33.711470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.303 [2024-07-25 19:18:33.711484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.303 [2024-07-25 19:18:33.711529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.303 qpair failed and we were unable to recover it. 
00:27:41.303 [2024-07-25 19:18:33.721281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.303 [2024-07-25 19:18:33.721427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.303 [2024-07-25 19:18:33.721455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.303 [2024-07-25 19:18:33.721471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.303 [2024-07-25 19:18:33.721485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.303 [2024-07-25 19:18:33.721515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.303 qpair failed and we were unable to recover it. 
00:27:41.303 [2024-07-25 19:18:33.731418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.303 [2024-07-25 19:18:33.731581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.303 [2024-07-25 19:18:33.731610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.303 [2024-07-25 19:18:33.731627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.303 [2024-07-25 19:18:33.731656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.303 [2024-07-25 19:18:33.731688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.303 qpair failed and we were unable to recover it. 
00:27:41.303 [2024-07-25 19:18:33.741385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.303 [2024-07-25 19:18:33.741572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.303 [2024-07-25 19:18:33.741617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.303 [2024-07-25 19:18:33.741633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.303 [2024-07-25 19:18:33.741647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.303 [2024-07-25 19:18:33.741692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.303 qpair failed and we were unable to recover it. 
00:27:41.303 [2024-07-25 19:18:33.751369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.303 [2024-07-25 19:18:33.751559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.303 [2024-07-25 19:18:33.751587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.303 [2024-07-25 19:18:33.751604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.303 [2024-07-25 19:18:33.751624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.303 [2024-07-25 19:18:33.751656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.303 qpair failed and we were unable to recover it. 
00:27:41.303 [2024-07-25 19:18:33.761417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.303 [2024-07-25 19:18:33.761562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.303 [2024-07-25 19:18:33.761590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.303 [2024-07-25 19:18:33.761606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.303 [2024-07-25 19:18:33.761620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.303 [2024-07-25 19:18:33.761651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.303 qpair failed and we were unable to recover it. 
00:27:41.563 [2024-07-25 19:18:33.771461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.563 [2024-07-25 19:18:33.771630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.563 [2024-07-25 19:18:33.771659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.563 [2024-07-25 19:18:33.771675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.563 [2024-07-25 19:18:33.771689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.563 [2024-07-25 19:18:33.771723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.563 qpair failed and we were unable to recover it. 
00:27:41.563 [2024-07-25 19:18:33.781481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.563 [2024-07-25 19:18:33.781685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.563 [2024-07-25 19:18:33.781715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.563 [2024-07-25 19:18:33.781731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.563 [2024-07-25 19:18:33.781744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63cc000b90 00:27:41.563 [2024-07-25 19:18:33.781775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:41.563 qpair failed and we were unable to recover it. 
00:27:41.563 [2024-07-25 19:18:33.791462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.563 [2024-07-25 19:18:33.791612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.563 [2024-07-25 19:18:33.791646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.563 [2024-07-25 19:18:33.791663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.563 [2024-07-25 19:18:33.791676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:41.563 [2024-07-25 19:18:33.791707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.563 qpair failed and we were unable to recover it. 
00:27:41.563 [2024-07-25 19:18:33.801521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.563 [2024-07-25 19:18:33.801674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.563 [2024-07-25 19:18:33.801702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.563 [2024-07-25 19:18:33.801718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.563 [2024-07-25 19:18:33.801732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11fd250 00:27:41.563 [2024-07-25 19:18:33.801761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.563 qpair failed and we were unable to recover it. 
00:27:41.563 [2024-07-25 19:18:33.811592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.563 [2024-07-25 19:18:33.811736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.563 [2024-07-25 19:18:33.811771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.563 [2024-07-25 19:18:33.811789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.563 [2024-07-25 19:18:33.811803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63dc000b90 00:27:41.563 [2024-07-25 19:18:33.811836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.563 qpair failed and we were unable to recover it. 
00:27:41.563 [2024-07-25 19:18:33.821594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.563 [2024-07-25 19:18:33.821743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.563 [2024-07-25 19:18:33.821773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.563 [2024-07-25 19:18:33.821789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.563 [2024-07-25 19:18:33.821803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63dc000b90 00:27:41.563 [2024-07-25 19:18:33.821833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.563 qpair failed and we were unable to recover it. 00:27:41.563 [2024-07-25 19:18:33.821963] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:41.563 A controller has encountered a failure and is being reset. 00:27:41.563 qpair failed and we were unable to recover it. 00:27:41.563 Controller properly reset. 00:27:41.563 Initializing NVMe Controllers 00:27:41.563 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:41.563 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:41.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:41.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:41.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:41.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:41.563 Initialization complete. 
Launching workers. 00:27:41.563 Starting thread on core 1 00:27:41.563 Starting thread on core 2 00:27:41.563 Starting thread on core 3 00:27:41.563 Starting thread on core 0 00:27:41.563 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:41.563 00:27:41.563 real 0m11.546s 00:27:41.563 user 0m20.797s 00:27:41.563 sys 0m5.479s 00:27:41.563 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:41.563 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:41.563 ************************************ 00:27:41.563 END TEST nvmf_target_disconnect_tc2 00:27:41.563 ************************************ 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:41.564 rmmod nvme_tcp 00:27:41.564 rmmod nvme_fabrics 00:27:41.564 rmmod nvme_keyring 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- 
# modprobe -v -r nvme-fabrics 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1019581 ']' 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1019581 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1019581 ']' 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1019581 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:41.564 19:18:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1019581 00:27:41.564 19:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:27:41.564 19:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:27:41.564 19:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1019581' 00:27:41.564 killing process with pid 1019581 00:27:41.564 19:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1019581 00:27:41.564 19:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1019581 00:27:42.129 19:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:42.129 19:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:42.129 19:18:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:42.129 19:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:42.129 19:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:42.129 19:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.129 19:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.129 19:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.028 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:44.028 00:27:44.028 real 0m16.823s 00:27:44.028 user 0m47.373s 00:27:44.028 sys 0m7.854s 00:27:44.028 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.028 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:44.028 ************************************ 00:27:44.028 END TEST nvmf_target_disconnect 00:27:44.028 ************************************ 00:27:44.028 19:18:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:44.028 00:27:44.028 real 5m16.832s 00:27:44.028 user 11m5.064s 00:27:44.028 sys 1m16.917s 00:27:44.028 19:18:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.028 19:18:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.028 ************************************ 00:27:44.028 END TEST nvmf_host 00:27:44.028 ************************************ 00:27:44.028 00:27:44.028 real 20m18.527s 00:27:44.028 user 47m12.987s 00:27:44.028 sys 5m17.008s 00:27:44.028 19:18:36 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.028 
19:18:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:44.028 ************************************ 00:27:44.028 END TEST nvmf_tcp 00:27:44.028 ************************************ 00:27:44.028 19:18:36 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:27:44.028 19:18:36 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:44.028 19:18:36 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:44.028 19:18:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:44.028 19:18:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.028 ************************************ 00:27:44.028 START TEST spdkcli_nvmf_tcp 00:27:44.028 ************************************ 00:27:44.028 19:18:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:44.028 * Looking for test storage... 00:27:44.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:44.028 19:18:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:44.028 19:18:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:44.028 19:18:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:44.028 19:18:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.028 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:44.287 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1020776 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1020776 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1020776 ']' 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:44.288 19:18:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:44.288 [2024-07-25 19:18:36.566019] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:44.288 [2024-07-25 19:18:36.566136] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020776 ] 00:27:44.288 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.288 [2024-07-25 19:18:36.634202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:44.288 [2024-07-25 19:18:36.743661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.288 [2024-07-25 19:18:36.743666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.222 19:18:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:45.222 19:18:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:27:45.222 19:18:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:45.222 19:18:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:45.222 19:18:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:45.222 19:18:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:45.222 19:18:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:45.222 19:18:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:45.222 19:18:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:45.222 19:18:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:27:45.222 19:18:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:45.222 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:45.222 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:45.222 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:45.222 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:45.222 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:45.222 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:45.222 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:45.222 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:45.222 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:45.222 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:45.222 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:45.222 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:45.222 ' 00:27:47.750 [2024-07-25 19:18:40.079001] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.122 [2024-07-25 19:18:41.315253] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:51.650 [2024-07-25 19:18:43.602580] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:53.549 [2024-07-25 19:18:45.576732] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 127.0.0.1 port 4262 *** 00:27:54.921 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:54.921 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:54.921 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:54.921 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:54.921 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:54.921 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:54.921 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:54.921 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:54.921 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:54.921 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:54.921 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:54.921 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:54.921 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:54.921 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:54.921 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:54.921 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 
'Malloc1', True] 00:27:54.921 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:54.921 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:54.921 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:54.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:54.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:54.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:54.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:54.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:54.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:54.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:54.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:54.922 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:54.922 19:18:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:54.922 19:18:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:54.922 19:18:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:54.922 19:18:47 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:54.922 19:18:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:54.922 19:18:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:54.922 19:18:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:54.922 19:18:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:55.180 19:18:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:55.438 19:18:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:55.438 19:18:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:55.438 19:18:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:55.438 19:18:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.438 19:18:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:55.438 19:18:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:55.438 19:18:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.438 19:18:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:55.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:55.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:55.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' 
'\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:55.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:55.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:55.438 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:55.438 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:55.438 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:55.438 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:55.438 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:55.438 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:55.438 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:55.438 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:55.438 ' 00:28:00.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:00.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:00.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:00.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:00.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:00.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:00.719 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:00.719 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:00.719 Executing command: ['/bdevs/malloc delete Malloc6', 
'Malloc6', False] 00:28:00.719 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:00.719 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:28:00.719 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:00.719 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:00.719 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:00.719 19:18:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:00.719 19:18:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:00.719 19:18:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:00.719 19:18:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1020776 00:28:00.719 19:18:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1020776 ']' 00:28:00.719 19:18:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1020776 00:28:00.719 19:18:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:28:00.719 19:18:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:00.719 19:18:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1020776 00:28:00.719 19:18:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:00.719 19:18:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:00.719 19:18:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1020776' 00:28:00.719 killing process with pid 1020776 00:28:00.719 19:18:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1020776 00:28:00.719 19:18:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1020776 00:28:00.978 19:18:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:00.978 19:18:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:00.978 
19:18:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1020776 ']' 00:28:00.978 19:18:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1020776 00:28:00.978 19:18:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1020776 ']' 00:28:00.978 19:18:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1020776 00:28:00.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1020776) - No such process 00:28:00.978 19:18:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1020776 is not found' 00:28:00.978 Process with pid 1020776 is not found 00:28:00.978 19:18:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:00.978 19:18:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:00.978 19:18:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:00.978 00:28:00.978 real 0m16.785s 00:28:00.978 user 0m35.596s 00:28:00.978 sys 0m0.868s 00:28:00.978 19:18:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:00.978 19:18:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:00.978 ************************************ 00:28:00.978 END TEST spdkcli_nvmf_tcp 00:28:00.978 ************************************ 00:28:00.978 19:18:53 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:00.978 19:18:53 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:00.978 19:18:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:00.978 19:18:53 -- common/autotest_common.sh@10 -- # set +x 00:28:00.978 ************************************ 00:28:00.978 START TEST 
nvmf_identify_passthru 00:28:00.978 ************************************ 00:28:00.978 19:18:53 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:00.978 * Looking for test storage... 00:28:00.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:00.979 19:18:53 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.979 19:18:53 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.979 19:18:53 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.979 19:18:53 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.979 19:18:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.979 19:18:53 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.979 19:18:53 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.979 19:18:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:00.979 19:18:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:00.979 19:18:53 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.979 19:18:53 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.979 19:18:53 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.979 19:18:53 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.979 19:18:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.979 19:18:53 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.979 19:18:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.979 19:18:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:28:00.979 19:18:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.979 19:18:53 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.979 19:18:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:00.979 19:18:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:00.979 19:18:53 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:28:00.979 19:18:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.508 19:18:55 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:03.508 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:03.508 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:03.508 19:18:55 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:03.508 Found net devices under 0000:09:00.0: cvl_0_0 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.508 19:18:55 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:03.508 Found net devices under 0000:09:00.1: cvl_0_1 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:03.508 19:18:55 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:03.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:28:03.508 00:28:03.508 --- 10.0.0.2 ping statistics --- 00:28:03.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.508 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:03.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:03.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:28:03.508 00:28:03.508 --- 10.0.0.1 ping statistics --- 00:28:03.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.508 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:03.508 19:18:55 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:03.508 19:18:55 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:03.508 19:18:55 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:03.508 19:18:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:03.508 19:18:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:03.508 19:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:28:03.508 19:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:28:03.508 19:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:28:03.508 19:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:28:03.508 19:18:55 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:28:03.508 19:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:28:03.508 19:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:03.508 19:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:03.508 19:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:03.764 19:18:56 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:03.764 19:18:56 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:28:03.764 19:18:56 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:0b:00.0 00:28:03.764 19:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:28:03.764 19:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:28:03.764 19:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:28:03.764 19:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:03.764 19:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:03.764 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.942 19:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:28:07.942 19:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:28:07.942 19:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:28:07.942 19:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:07.942 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.123 19:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:12.123 19:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:12.123 19:19:04 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:12.123 19:19:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:12.123 19:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:12.123 19:19:04 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:12.123 19:19:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:12.123 19:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1025705 00:28:12.123 19:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:12.123 19:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:12.123 19:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1025705 00:28:12.123 19:19:04 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1025705 ']' 00:28:12.123 19:19:04 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.123 19:19:04 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:12.123 19:19:04 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:12.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.123 19:19:04 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:12.123 19:19:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:12.123 [2024-07-25 19:19:04.440510] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:12.123 [2024-07-25 19:19:04.440604] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.123 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.123 [2024-07-25 19:19:04.513337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:12.380 [2024-07-25 19:19:04.621597] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.380 [2024-07-25 19:19:04.621658] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.380 [2024-07-25 19:19:04.621678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.380 [2024-07-25 19:19:04.621689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.380 [2024-07-25 19:19:04.621698] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:12.380 [2024-07-25 19:19:04.621773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.380 [2024-07-25 19:19:04.621839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.380 [2024-07-25 19:19:04.621906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.380 [2024-07-25 19:19:04.621909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.945 19:19:05 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:12.945 19:19:05 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:28:12.945 19:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:12.945 19:19:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.945 19:19:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:12.945 INFO: Log level set to 20 00:28:12.945 INFO: Requests: 00:28:12.945 { 00:28:12.945 "jsonrpc": "2.0", 00:28:12.945 "method": "nvmf_set_config", 00:28:12.945 "id": 1, 00:28:12.945 "params": { 00:28:12.945 "admin_cmd_passthru": { 00:28:12.945 "identify_ctrlr": true 00:28:12.945 } 00:28:12.945 } 00:28:12.945 } 00:28:12.945 00:28:12.945 INFO: response: 00:28:12.945 { 00:28:12.945 "jsonrpc": "2.0", 00:28:12.945 "id": 1, 00:28:12.945 "result": true 00:28:12.945 } 00:28:12.945 00:28:12.945 19:19:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.945 19:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:12.945 19:19:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.945 19:19:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:12.945 INFO: Setting log level to 20 00:28:12.945 INFO: Setting log level to 20 00:28:12.945 INFO: Log level set to 20 00:28:12.945 INFO: Log level set to 20 00:28:12.945 
INFO: Requests: 00:28:12.945 { 00:28:12.945 "jsonrpc": "2.0", 00:28:12.945 "method": "framework_start_init", 00:28:12.945 "id": 1 00:28:12.945 } 00:28:12.945 00:28:12.945 INFO: Requests: 00:28:12.945 { 00:28:12.945 "jsonrpc": "2.0", 00:28:12.945 "method": "framework_start_init", 00:28:12.945 "id": 1 00:28:12.945 } 00:28:12.945 00:28:13.202 [2024-07-25 19:19:05.503551] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:13.202 INFO: response: 00:28:13.202 { 00:28:13.202 "jsonrpc": "2.0", 00:28:13.202 "id": 1, 00:28:13.202 "result": true 00:28:13.202 } 00:28:13.202 00:28:13.202 INFO: response: 00:28:13.202 { 00:28:13.202 "jsonrpc": "2.0", 00:28:13.202 "id": 1, 00:28:13.202 "result": true 00:28:13.202 } 00:28:13.202 00:28:13.202 19:19:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.202 19:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:13.202 19:19:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.202 19:19:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:13.202 INFO: Setting log level to 40 00:28:13.202 INFO: Setting log level to 40 00:28:13.202 INFO: Setting log level to 40 00:28:13.202 [2024-07-25 19:19:05.513654] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.202 19:19:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.202 19:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:13.202 19:19:05 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:13.202 19:19:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:13.202 19:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:28:13.202 19:19:05 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.202 19:19:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:16.482 Nvme0n1 00:28:16.482 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.482 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:16.482 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.482 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:16.482 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.482 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:16.482 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.482 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:16.482 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.482 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:16.482 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.482 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:16.482 [2024-07-25 19:19:08.412490] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.482 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.482 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:16.482 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.482 19:19:08 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:16.482 [ 00:28:16.482 { 00:28:16.482 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:16.482 "subtype": "Discovery", 00:28:16.482 "listen_addresses": [], 00:28:16.482 "allow_any_host": true, 00:28:16.482 "hosts": [] 00:28:16.482 }, 00:28:16.482 { 00:28:16.482 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:16.482 "subtype": "NVMe", 00:28:16.482 "listen_addresses": [ 00:28:16.483 { 00:28:16.483 "trtype": "TCP", 00:28:16.483 "adrfam": "IPv4", 00:28:16.483 "traddr": "10.0.0.2", 00:28:16.483 "trsvcid": "4420" 00:28:16.483 } 00:28:16.483 ], 00:28:16.483 "allow_any_host": true, 00:28:16.483 "hosts": [], 00:28:16.483 "serial_number": "SPDK00000000000001", 00:28:16.483 "model_number": "SPDK bdev Controller", 00:28:16.483 "max_namespaces": 1, 00:28:16.483 "min_cntlid": 1, 00:28:16.483 "max_cntlid": 65519, 00:28:16.483 "namespaces": [ 00:28:16.483 { 00:28:16.483 "nsid": 1, 00:28:16.483 "bdev_name": "Nvme0n1", 00:28:16.483 "name": "Nvme0n1", 00:28:16.483 "nguid": "4C9467580DDA4E9CA3F67ACCDBDEF0F3", 00:28:16.483 "uuid": "4c946758-0dda-4e9c-a3f6-7accdbdef0f3" 00:28:16.483 } 00:28:16.483 ] 00:28:16.483 } 00:28:16.483 ] 00:28:16.483 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.483 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:16.483 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:16.483 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:16.483 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.483 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:28:16.483 19:19:08 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:16.483 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:16.483 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:16.483 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.483 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:16.483 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:28:16.483 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:16.483 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:16.483 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.483 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:16.483 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.483 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:16.483 19:19:08 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:16.483 19:19:08 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:16.483 19:19:08 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:28:16.483 19:19:08 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:16.483 19:19:08 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:28:16.483 19:19:08 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:16.483 19:19:08 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:16.483 rmmod 
nvme_tcp 00:28:16.483 rmmod nvme_fabrics 00:28:16.483 rmmod nvme_keyring 00:28:16.483 19:19:08 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:16.483 19:19:08 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:28:16.483 19:19:08 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:16.483 19:19:08 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1025705 ']' 00:28:16.483 19:19:08 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1025705 00:28:16.483 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1025705 ']' 00:28:16.483 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1025705 00:28:16.483 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:28:16.483 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:16.483 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1025705 00:28:16.483 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:16.483 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:16.483 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1025705' 00:28:16.483 killing process with pid 1025705 00:28:16.483 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1025705 00:28:16.483 19:19:08 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1025705 00:28:17.857 19:19:10 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:17.857 19:19:10 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:17.857 19:19:10 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:17.857 19:19:10 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:28:17.857 19:19:10 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:17.857 19:19:10 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.857 19:19:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:17.857 19:19:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.394 19:19:12 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:20.394 00:28:20.394 real 0m19.031s 00:28:20.394 user 0m29.267s 00:28:20.394 sys 0m2.653s 00:28:20.394 19:19:12 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:20.394 19:19:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:20.394 ************************************ 00:28:20.394 END TEST nvmf_identify_passthru 00:28:20.394 ************************************ 00:28:20.394 19:19:12 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:20.394 19:19:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:20.394 19:19:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:20.394 19:19:12 -- common/autotest_common.sh@10 -- # set +x 00:28:20.394 ************************************ 00:28:20.394 START TEST nvmf_dif 00:28:20.394 ************************************ 00:28:20.394 19:19:12 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:20.394 * Looking for test storage... 
00:28:20.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:20.394 19:19:12 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.394 19:19:12 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.394 19:19:12 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.394 19:19:12 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.394 19:19:12 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.394 19:19:12 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.394 19:19:12 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.394 19:19:12 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:28:20.394 19:19:12 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:20.394 19:19:12 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:20.394 19:19:12 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:20.394 19:19:12 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:20.394 19:19:12 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:20.394 19:19:12 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.394 19:19:12 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:20.394 19:19:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:20.394 19:19:12 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:20.394 19:19:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:22.296 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 
(0x8086 - 0x159b)' 00:28:22.296 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:22.296 Found net devices under 0000:09:00.0: cvl_0_0 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:22.296 Found net devices under 0000:09:00.1: cvl_0_1 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.296 19:19:14 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:28:22.296 00:28:22.296 --- 10.0.0.2 ping statistics --- 00:28:22.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.296 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:28:22.296 19:19:14 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:28:22.554 00:28:22.554 --- 10.0.0.1 ping statistics --- 00:28:22.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.554 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:28:22.554 19:19:14 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.554 19:19:14 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:22.554 19:19:14 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:22.554 19:19:14 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:23.929 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:23.929 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:23.929 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:23.929 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:23.929 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:23.929 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:23.929 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:23.929 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:23.929 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:23.929 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:23.929 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:23.929 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:23.929 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:23.929 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:23.929 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:23.929 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:23.929 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:23.929 19:19:16 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.929 19:19:16 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:23.929 19:19:16 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:23.929 19:19:16 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.929 19:19:16 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:23.929 19:19:16 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:23.929 19:19:16 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:23.929 19:19:16 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:23.929 19:19:16 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:23.929 19:19:16 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:23.929 19:19:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:23.929 19:19:16 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1029482 00:28:23.929 19:19:16 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:23.929 19:19:16 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1029482 00:28:23.929 19:19:16 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1029482 ']' 00:28:23.929 19:19:16 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.929 19:19:16 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:23.929 19:19:16 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.929 19:19:16 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:23.929 19:19:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:23.929 [2024-07-25 19:19:16.392052] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:23.929 [2024-07-25 19:19:16.392142] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.187 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.187 [2024-07-25 19:19:16.465838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.187 [2024-07-25 19:19:16.571981] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.187 [2024-07-25 19:19:16.572032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.187 [2024-07-25 19:19:16.572045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.187 [2024-07-25 19:19:16.572055] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.187 [2024-07-25 19:19:16.572064] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:24.187 [2024-07-25 19:19:16.572088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.446 19:19:16 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.446 19:19:16 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:28:24.446 19:19:16 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:24.446 19:19:16 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:24.446 19:19:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:24.446 19:19:16 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.446 19:19:16 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:24.446 19:19:16 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:24.446 19:19:16 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.446 19:19:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:24.446 [2024-07-25 19:19:16.716570] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.446 19:19:16 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.446 19:19:16 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:24.446 19:19:16 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:24.446 19:19:16 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:24.446 19:19:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:24.446 ************************************ 00:28:24.446 START TEST fio_dif_1_default 00:28:24.446 ************************************ 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:24.446 bdev_null0 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:24.446 [2024-07-25 19:19:16.772857] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.446 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.447 { 00:28:24.447 "params": { 00:28:24.447 "name": "Nvme$subsystem", 00:28:24.447 "trtype": "$TEST_TRANSPORT", 00:28:24.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.447 "adrfam": "ipv4", 00:28:24.447 "trsvcid": "$NVMF_PORT", 00:28:24.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.447 "hdgst": ${hdgst:-false}, 00:28:24.447 "ddgst": ${ddgst:-false} 00:28:24.447 }, 00:28:24.447 "method": "bdev_nvme_attach_controller" 00:28:24.447 } 00:28:24.447 EOF 00:28:24.447 )") 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:24.447 "params": { 00:28:24.447 "name": "Nvme0", 00:28:24.447 "trtype": "tcp", 00:28:24.447 "traddr": "10.0.0.2", 00:28:24.447 "adrfam": "ipv4", 00:28:24.447 "trsvcid": "4420", 00:28:24.447 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:24.447 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:24.447 "hdgst": false, 00:28:24.447 "ddgst": false 00:28:24.447 }, 00:28:24.447 "method": "bdev_nvme_attach_controller" 00:28:24.447 }' 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:24.447 19:19:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.705 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:24.705 fio-3.35 
00:28:24.705 Starting 1 thread 00:28:24.705 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.932 00:28:36.932 filename0: (groupid=0, jobs=1): err= 0: pid=1029707: Thu Jul 25 19:19:27 2024 00:28:36.932 read: IOPS=189, BW=758KiB/s (776kB/s)(7600KiB/10029msec) 00:28:36.932 slat (nsec): min=6437, max=82616, avg=8706.49, stdev=3389.40 00:28:36.932 clat (usec): min=816, max=45409, avg=21085.74, stdev=20141.04 00:28:36.932 lat (usec): min=824, max=45443, avg=21094.45, stdev=20140.73 00:28:36.932 clat percentiles (usec): 00:28:36.932 | 1.00th=[ 873], 5.00th=[ 889], 10.00th=[ 898], 20.00th=[ 906], 00:28:36.932 | 30.00th=[ 914], 40.00th=[ 922], 50.00th=[41157], 60.00th=[41157], 00:28:36.932 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:28:36.932 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:28:36.932 | 99.99th=[45351] 00:28:36.932 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=758.40, stdev=23.45, samples=20 00:28:36.932 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:28:36.932 lat (usec) : 1000=49.84% 00:28:36.932 lat (msec) : 2=0.05%, 50=50.11% 00:28:36.932 cpu : usr=89.71%, sys=10.01%, ctx=17, majf=0, minf=292 00:28:36.932 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:36.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.932 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:36.932 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:36.932 00:28:36.932 Run status group 0 (all jobs): 00:28:36.932 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7600KiB (7782kB), run=10029-10029msec 00:28:36.932 19:19:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:36.932 19:19:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:36.932 19:19:27 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:36.932 19:19:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:36.932 19:19:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:36.932 19:19:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:36.932 19:19:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.932 19:19:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:36.932 19:19:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.932 19:19:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:36.932 19:19:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.932 19:19:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:36.932 19:19:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.932 00:28:36.932 real 0m11.251s 00:28:36.932 user 0m10.273s 00:28:36.932 sys 0m1.279s 00:28:36.932 19:19:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:36.932 19:19:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:36.932 ************************************ 00:28:36.932 END TEST fio_dif_1_default 00:28:36.932 ************************************ 00:28:36.932 19:19:28 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:36.932 19:19:28 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:36.932 19:19:28 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:36.932 19:19:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:36.932 ************************************ 00:28:36.932 START TEST fio_dif_1_multi_subsystems 00:28:36.932 
************************************ 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.932 bdev_null0 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.932 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.933 [2024-07-25 19:19:28.076365] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.933 bdev_null1 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.933 
19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.933 19:19:28 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.933 { 00:28:36.933 "params": { 00:28:36.933 "name": "Nvme$subsystem", 00:28:36.933 "trtype": "$TEST_TRANSPORT", 00:28:36.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.933 "adrfam": "ipv4", 00:28:36.933 "trsvcid": "$NVMF_PORT", 00:28:36.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.933 "hdgst": ${hdgst:-false}, 00:28:36.933 "ddgst": ${ddgst:-false} 00:28:36.933 }, 00:28:36.933 "method": "bdev_nvme_attach_controller" 00:28:36.933 } 00:28:36.933 EOF 00:28:36.933 )") 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.933 { 00:28:36.933 "params": { 00:28:36.933 "name": "Nvme$subsystem", 00:28:36.933 "trtype": "$TEST_TRANSPORT", 00:28:36.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.933 "adrfam": "ipv4", 00:28:36.933 "trsvcid": "$NVMF_PORT", 00:28:36.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.933 "hdgst": ${hdgst:-false}, 00:28:36.933 "ddgst": ${ddgst:-false} 00:28:36.933 }, 00:28:36.933 "method": "bdev_nvme_attach_controller" 00:28:36.933 } 00:28:36.933 EOF 00:28:36.933 )") 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:36.933 "params": { 00:28:36.933 "name": "Nvme0", 00:28:36.933 "trtype": "tcp", 00:28:36.933 "traddr": "10.0.0.2", 00:28:36.933 "adrfam": "ipv4", 00:28:36.933 "trsvcid": "4420", 00:28:36.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:36.933 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:36.933 "hdgst": false, 00:28:36.933 "ddgst": false 00:28:36.933 }, 00:28:36.933 "method": "bdev_nvme_attach_controller" 00:28:36.933 },{ 00:28:36.933 "params": { 00:28:36.933 "name": "Nvme1", 00:28:36.933 "trtype": "tcp", 00:28:36.933 "traddr": "10.0.0.2", 00:28:36.933 "adrfam": "ipv4", 00:28:36.933 "trsvcid": "4420", 00:28:36.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:36.933 "hdgst": false, 00:28:36.933 "ddgst": false 00:28:36.933 }, 00:28:36.933 "method": "bdev_nvme_attach_controller" 00:28:36.933 }' 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:36.933 19:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:36.933 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:36.933 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:36.933 fio-3.35 00:28:36.933 Starting 2 threads 00:28:36.933 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.898 00:28:46.898 filename0: (groupid=0, jobs=1): err= 0: pid=1031111: Thu Jul 25 19:19:39 2024 00:28:46.898 read: IOPS=185, BW=743KiB/s (761kB/s)(7456KiB/10037msec) 00:28:46.898 slat (nsec): min=7112, max=35704, avg=10641.98, stdev=4962.92 00:28:46.898 clat (usec): min=886, max=48168, avg=21504.58, stdev=20384.99 00:28:46.898 lat (usec): min=893, max=48181, avg=21515.23, stdev=20384.66 00:28:46.898 clat percentiles (usec): 00:28:46.898 | 1.00th=[ 914], 5.00th=[ 938], 10.00th=[ 963], 20.00th=[ 996], 00:28:46.898 | 30.00th=[ 1012], 40.00th=[ 1057], 50.00th=[41157], 60.00th=[41681], 00:28:46.898 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:28:46.898 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:28:46.898 | 99.99th=[47973] 00:28:46.898 bw ( KiB/s): min= 672, max= 768, per=49.86%, avg=744.00, stdev=32.63, samples=20 00:28:46.898 iops : min= 168, max= 192, avg=186.00, stdev= 8.16, samples=20 00:28:46.898 lat (usec) : 1000=21.94% 00:28:46.898 lat (msec) : 2=27.84%, 50=50.21% 00:28:46.898 cpu : usr=93.98%, sys=5.71%, ctx=22, majf=0, minf=71 00:28:46.898 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:46.898 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.898 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:46.898 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:46.898 filename1: (groupid=0, jobs=1): err= 0: pid=1031112: Thu Jul 25 19:19:39 2024 00:28:46.898 read: IOPS=187, BW=749KiB/s (767kB/s)(7520KiB/10037msec) 00:28:46.898 slat (nsec): min=7094, max=40349, avg=10613.58, stdev=4874.05 00:28:46.899 clat (usec): min=836, max=47181, avg=21321.67, stdev=20216.02 00:28:46.899 lat (usec): min=844, max=47197, avg=21332.28, stdev=20215.17 00:28:46.899 clat percentiles (usec): 00:28:46.899 | 1.00th=[ 857], 5.00th=[ 889], 10.00th=[ 906], 20.00th=[ 922], 00:28:46.899 | 30.00th=[ 947], 40.00th=[ 1004], 50.00th=[40633], 60.00th=[41157], 00:28:46.899 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:28:46.899 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:28:46.899 | 99.99th=[46924] 00:28:46.899 bw ( KiB/s): min= 672, max= 768, per=50.27%, avg=750.40, stdev=31.96, samples=20 00:28:46.899 iops : min= 168, max= 192, avg=187.60, stdev= 7.99, samples=20 00:28:46.899 lat (usec) : 1000=39.73% 00:28:46.899 lat (msec) : 2=10.05%, 50=50.21% 00:28:46.899 cpu : usr=94.16%, sys=5.53%, ctx=16, majf=0, minf=190 00:28:46.899 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:46.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.899 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:46.899 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:46.899 00:28:46.899 Run status group 0 (all jobs): 00:28:46.899 READ: bw=1492KiB/s (1528kB/s), 743KiB/s-749KiB/s (761kB/s-767kB/s), io=14.6MiB (15.3MB), run=10037-10037msec 00:28:47.157 19:19:39 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.157 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:47.158 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.158 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:47.417 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.417 00:28:47.417 real 0m11.587s 00:28:47.417 user 0m20.335s 00:28:47.417 sys 0m1.465s 00:28:47.417 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:47.417 19:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:47.417 ************************************ 00:28:47.417 END TEST fio_dif_1_multi_subsystems 00:28:47.417 ************************************ 00:28:47.417 19:19:39 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:47.417 19:19:39 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:47.417 19:19:39 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:47.417 19:19:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:47.417 ************************************ 00:28:47.417 START TEST fio_dif_rand_params 00:28:47.417 ************************************ 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:47.417 19:19:39 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:47.417 bdev_null0 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.417 
19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:47.417 [2024-07-25 19:19:39.706862] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:47.417 { 00:28:47.417 "params": { 00:28:47.417 "name": "Nvme$subsystem", 00:28:47.417 "trtype": "$TEST_TRANSPORT", 00:28:47.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.417 "adrfam": "ipv4", 00:28:47.417 "trsvcid": "$NVMF_PORT", 00:28:47.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.417 
"hdgst": ${hdgst:-false}, 00:28:47.417 "ddgst": ${ddgst:-false} 00:28:47.417 }, 00:28:47.417 "method": "bdev_nvme_attach_controller" 00:28:47.417 } 00:28:47.417 EOF 00:28:47.417 )") 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:47.417 19:19:39 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:47.417 "params": { 00:28:47.417 "name": "Nvme0", 00:28:47.417 "trtype": "tcp", 00:28:47.417 "traddr": "10.0.0.2", 00:28:47.417 "adrfam": "ipv4", 00:28:47.417 "trsvcid": "4420", 00:28:47.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:47.417 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:47.417 "hdgst": false, 00:28:47.417 "ddgst": false 00:28:47.417 }, 00:28:47.417 "method": "bdev_nvme_attach_controller" 00:28:47.417 }' 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:47.417 19:19:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:47.418 19:19:39 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:47.676 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:47.676 ... 00:28:47.676 fio-3.35 00:28:47.676 Starting 3 threads 00:28:47.676 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.237 00:28:54.237 filename0: (groupid=0, jobs=1): err= 0: pid=1032517: Thu Jul 25 19:19:45 2024 00:28:54.237 read: IOPS=153, BW=19.2MiB/s (20.2MB/s)(97.0MiB/5046msec) 00:28:54.237 slat (nsec): min=5197, max=49578, avg=15716.17, stdev=5408.10 00:28:54.237 clat (usec): min=7201, max=91741, avg=19429.36, stdev=14767.32 00:28:54.237 lat (usec): min=7215, max=91761, avg=19445.07, stdev=14767.47 00:28:54.237 clat percentiles (usec): 00:28:54.237 | 1.00th=[ 7439], 5.00th=[ 8029], 10.00th=[ 8291], 20.00th=[ 9241], 00:28:54.237 | 30.00th=[10421], 40.00th=[12649], 50.00th=[16057], 60.00th=[17695], 00:28:54.237 | 70.00th=[19268], 80.00th=[20841], 90.00th=[51643], 95.00th=[56886], 00:28:54.237 | 99.00th=[61604], 99.50th=[63701], 99.90th=[91751], 99.95th=[91751], 00:28:54.237 | 99.99th=[91751] 00:28:54.237 bw ( KiB/s): min=13312, max=30720, per=35.63%, avg=19814.40, stdev=5016.85, samples=10 00:28:54.237 iops : min= 104, max= 240, avg=154.80, stdev=39.19, samples=10 00:28:54.237 lat (msec) : 10=26.29%, 20=47.42%, 50=14.56%, 100=11.73% 00:28:54.237 cpu : usr=93.18%, sys=6.18%, ctx=115, majf=0, minf=85 00:28:54.237 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:54.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.237 issued rwts: total=776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:54.237 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:54.237 filename0: (groupid=0, jobs=1): err= 0: pid=1032518: Thu 
Jul 25 19:19:45 2024 00:28:54.237 read: IOPS=125, BW=15.7MiB/s (16.5MB/s)(79.2MiB/5047msec) 00:28:54.237 slat (nsec): min=4903, max=47670, avg=14625.54, stdev=5193.19 00:28:54.237 clat (usec): min=8378, max=60174, avg=23793.90, stdev=17361.76 00:28:54.237 lat (usec): min=8386, max=60187, avg=23808.52, stdev=17362.10 00:28:54.237 clat percentiles (usec): 00:28:54.237 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[11600], 20.00th=[12911], 00:28:54.237 | 30.00th=[13698], 40.00th=[14353], 50.00th=[15008], 60.00th=[15795], 00:28:54.237 | 70.00th=[17433], 80.00th=[52167], 90.00th=[54789], 95.00th=[55837], 00:28:54.237 | 99.00th=[58983], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:28:54.237 | 99.99th=[60031] 00:28:54.237 bw ( KiB/s): min=12544, max=19712, per=29.04%, avg=16153.60, stdev=2535.56, samples=10 00:28:54.237 iops : min= 98, max= 154, avg=126.20, stdev=19.81, samples=10 00:28:54.237 lat (msec) : 10=4.10%, 20=71.92%, 50=0.47%, 100=23.50% 00:28:54.237 cpu : usr=93.42%, sys=6.10%, ctx=35, majf=0, minf=108 00:28:54.237 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:54.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.237 issued rwts: total=634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:54.237 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:54.237 filename0: (groupid=0, jobs=1): err= 0: pid=1032519: Thu Jul 25 19:19:45 2024 00:28:54.237 read: IOPS=156, BW=19.5MiB/s (20.5MB/s)(97.9MiB/5007msec) 00:28:54.237 slat (nsec): min=5397, max=67187, avg=13525.68, stdev=4511.78 00:28:54.237 clat (usec): min=5260, max=96438, avg=19158.43, stdev=16986.72 00:28:54.237 lat (usec): min=5274, max=96451, avg=19171.96, stdev=16986.92 00:28:54.237 clat percentiles (usec): 00:28:54.237 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6521], 20.00th=[ 6915], 00:28:54.237 | 30.00th=[ 8160], 40.00th=[ 9110], 
50.00th=[11207], 60.00th=[17433], 00:28:54.237 | 70.00th=[20841], 80.00th=[25297], 90.00th=[51643], 95.00th=[56886], 00:28:54.237 | 99.00th=[68682], 99.50th=[92799], 99.90th=[95945], 99.95th=[95945], 00:28:54.237 | 99.99th=[95945] 00:28:54.237 bw ( KiB/s): min=11776, max=30208, per=35.91%, avg=19970.70, stdev=6631.19, samples=10 00:28:54.237 iops : min= 92, max= 236, avg=156.00, stdev=51.83, samples=10 00:28:54.237 lat (msec) : 10=47.00%, 20=19.41%, 50=21.20%, 100=12.39% 00:28:54.237 cpu : usr=93.01%, sys=6.53%, ctx=15, majf=0, minf=149 00:28:54.237 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:54.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.237 issued rwts: total=783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:54.237 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:54.237 00:28:54.237 Run status group 0 (all jobs): 00:28:54.237 READ: bw=54.3MiB/s (57.0MB/s), 15.7MiB/s-19.5MiB/s (16.5MB/s-20.5MB/s), io=274MiB (287MB), run=5007-5047msec 00:28:54.237 19:19:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:54.237 19:19:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:54.237 19:19:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:54.237 19:19:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:54.237 19:19:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:54.237 19:19:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:54.237 19:19:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.237 19:19:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:54.237 bdev_null0 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:54.237 [2024-07-25 19:19:46.042922] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:54.237 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.238 19:19:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:54.238 bdev_null1 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 
--dif-type 2 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:54.238 bdev_null2 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 1 2 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.238 { 00:28:54.238 "params": { 00:28:54.238 "name": "Nvme$subsystem", 00:28:54.238 "trtype": "$TEST_TRANSPORT", 00:28:54.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.238 "adrfam": "ipv4", 00:28:54.238 "trsvcid": "$NVMF_PORT", 00:28:54.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.238 "hdgst": ${hdgst:-false}, 00:28:54.238 "ddgst": ${ddgst:-false} 00:28:54.238 }, 00:28:54.238 "method": "bdev_nvme_attach_controller" 00:28:54.238 } 00:28:54.238 EOF 00:28:54.238 )") 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:54.238 19:19:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.238 { 00:28:54.238 "params": { 00:28:54.238 "name": "Nvme$subsystem", 00:28:54.238 "trtype": "$TEST_TRANSPORT", 00:28:54.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.238 "adrfam": "ipv4", 00:28:54.238 "trsvcid": "$NVMF_PORT", 00:28:54.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.238 "hdgst": ${hdgst:-false}, 00:28:54.238 "ddgst": ${ddgst:-false} 00:28:54.238 }, 00:28:54.238 "method": "bdev_nvme_attach_controller" 00:28:54.238 } 00:28:54.238 EOF 00:28:54.238 )") 00:28:54.238 19:19:46 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.238 { 00:28:54.238 "params": { 00:28:54.238 "name": "Nvme$subsystem", 00:28:54.238 "trtype": "$TEST_TRANSPORT", 00:28:54.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.238 "adrfam": "ipv4", 00:28:54.238 "trsvcid": "$NVMF_PORT", 00:28:54.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.238 "hdgst": ${hdgst:-false}, 00:28:54.238 "ddgst": ${ddgst:-false} 00:28:54.238 }, 00:28:54.238 "method": "bdev_nvme_attach_controller" 00:28:54.238 } 00:28:54.238 EOF 00:28:54.238 )") 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:54.238 19:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:54.238 "params": { 00:28:54.238 "name": "Nvme0", 00:28:54.238 "trtype": "tcp", 00:28:54.238 "traddr": "10.0.0.2", 00:28:54.238 "adrfam": "ipv4", 00:28:54.238 "trsvcid": "4420", 00:28:54.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:54.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:54.238 "hdgst": false, 00:28:54.238 "ddgst": false 00:28:54.238 }, 00:28:54.238 "method": "bdev_nvme_attach_controller" 00:28:54.238 },{ 00:28:54.238 "params": { 00:28:54.238 "name": "Nvme1", 00:28:54.238 "trtype": "tcp", 00:28:54.238 "traddr": "10.0.0.2", 00:28:54.238 "adrfam": "ipv4", 00:28:54.238 "trsvcid": "4420", 00:28:54.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:54.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:54.239 "hdgst": false, 00:28:54.239 "ddgst": false 00:28:54.239 }, 00:28:54.239 "method": "bdev_nvme_attach_controller" 00:28:54.239 },{ 00:28:54.239 "params": { 00:28:54.239 "name": "Nvme2", 00:28:54.239 "trtype": "tcp", 00:28:54.239 "traddr": "10.0.0.2", 00:28:54.239 "adrfam": "ipv4", 00:28:54.239 "trsvcid": "4420", 00:28:54.239 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:54.239 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:54.239 "hdgst": false, 00:28:54.239 "ddgst": false 00:28:54.239 }, 00:28:54.239 "method": "bdev_nvme_attach_controller" 00:28:54.239 }' 00:28:54.239 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:54.239 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:54.239 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:54.239 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:54.239 19:19:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:54.239 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:54.239 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:54.239 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:54.239 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:54.239 19:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:54.239 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:54.239 ... 00:28:54.239 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:54.239 ... 00:28:54.239 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:54.239 ... 
00:28:54.239 fio-3.35 00:28:54.239 Starting 24 threads 00:28:54.239 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.444 00:29:06.444 filename0: (groupid=0, jobs=1): err= 0: pid=1033376: Thu Jul 25 19:19:57 2024 00:29:06.444 read: IOPS=66, BW=265KiB/s (272kB/s)(2688KiB/10134msec) 00:29:06.444 slat (usec): min=11, max=105, avg=70.98, stdev=14.71 00:29:06.444 clat (msec): min=88, max=366, avg=240.70, stdev=45.04 00:29:06.444 lat (msec): min=88, max=366, avg=240.77, stdev=45.05 00:29:06.444 clat percentiles (msec): 00:29:06.444 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 192], 00:29:06.444 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 255], 00:29:06.444 | 70.00th=[ 257], 80.00th=[ 264], 90.00th=[ 284], 95.00th=[ 296], 00:29:06.444 | 99.00th=[ 347], 99.50th=[ 351], 99.90th=[ 368], 99.95th=[ 368], 00:29:06.444 | 99.99th=[ 368] 00:29:06.444 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=262.40, stdev=63.87, samples=20 00:29:06.444 iops : min= 32, max= 96, avg=65.60, stdev=15.97, samples=20 00:29:06.444 lat (msec) : 100=0.60%, 250=39.88%, 500=59.52% 00:29:06.444 cpu : usr=98.12%, sys=1.31%, ctx=39, majf=0, minf=58 00:29:06.444 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:29:06.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.444 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.444 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.444 filename0: (groupid=0, jobs=1): err= 0: pid=1033377: Thu Jul 25 19:19:57 2024 00:29:06.444 read: IOPS=66, BW=265KiB/s (271kB/s)(2680KiB/10121msec) 00:29:06.444 slat (usec): min=5, max=155, avg=40.41, stdev=21.13 00:29:06.444 clat (msec): min=126, max=368, avg=240.99, stdev=41.20 00:29:06.444 lat (msec): min=126, max=368, avg=241.03, stdev=41.20 00:29:06.444 clat percentiles (msec): 00:29:06.444 | 1.00th=[ 140], 
5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 203], 00:29:06.444 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 257], 00:29:06.444 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 279], 00:29:06.444 | 99.00th=[ 351], 99.50th=[ 368], 99.90th=[ 368], 99.95th=[ 368], 00:29:06.444 | 99.99th=[ 368] 00:29:06.444 bw ( KiB/s): min= 256, max= 384, per=4.01%, avg=262.40, stdev=28.62, samples=20 00:29:06.444 iops : min= 64, max= 96, avg=65.60, stdev= 7.16, samples=20 00:29:06.444 lat (msec) : 250=40.00%, 500=60.00% 00:29:06.444 cpu : usr=97.79%, sys=1.52%, ctx=76, majf=0, minf=28 00:29:06.444 IO depths : 1=3.3%, 2=9.6%, 4=25.1%, 8=53.0%, 16=9.1%, 32=0.0%, >=64=0.0% 00:29:06.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.444 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.444 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.444 filename0: (groupid=0, jobs=1): err= 0: pid=1033378: Thu Jul 25 19:19:57 2024 00:29:06.444 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10110msec) 00:29:06.444 slat (nsec): min=8360, max=96636, avg=30688.82, stdev=20967.71 00:29:06.444 clat (msec): min=168, max=298, avg=246.28, stdev=33.74 00:29:06.444 lat (msec): min=168, max=298, avg=246.31, stdev=33.73 00:29:06.444 clat percentiles (msec): 00:29:06.444 | 1.00th=[ 169], 5.00th=[ 169], 10.00th=[ 169], 20.00th=[ 241], 00:29:06.444 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 257], 00:29:06.444 | 70.00th=[ 257], 80.00th=[ 264], 90.00th=[ 279], 95.00th=[ 288], 00:29:06.444 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:29:06.444 | 99.99th=[ 300] 00:29:06.444 bw ( KiB/s): min= 128, max= 384, per=3.92%, avg=256.00, stdev=60.34, samples=19 00:29:06.444 iops : min= 32, max= 96, avg=64.00, stdev=15.08, samples=19 00:29:06.444 lat (msec) : 250=34.15%, 500=65.85% 00:29:06.444 cpu : usr=97.59%, 
sys=1.58%, ctx=14, majf=0, minf=33 00:29:06.444 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:06.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.444 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.444 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.444 filename0: (groupid=0, jobs=1): err= 0: pid=1033379: Thu Jul 25 19:19:57 2024 00:29:06.444 read: IOPS=75, BW=303KiB/s (310kB/s)(3072KiB/10153msec) 00:29:06.444 slat (usec): min=8, max=126, avg=22.25, stdev=13.10 00:29:06.444 clat (msec): min=61, max=365, avg=209.35, stdev=52.85 00:29:06.444 lat (msec): min=61, max=365, avg=209.37, stdev=52.85 00:29:06.444 clat percentiles (msec): 00:29:06.444 | 1.00th=[ 62], 5.00th=[ 120], 10.00th=[ 134], 20.00th=[ 167], 00:29:06.444 | 30.00th=[ 171], 40.00th=[ 201], 50.00th=[ 215], 60.00th=[ 249], 00:29:06.444 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 268], 00:29:06.444 | 99.00th=[ 279], 99.50th=[ 342], 99.90th=[ 368], 99.95th=[ 368], 00:29:06.444 | 99.99th=[ 368] 00:29:06.444 bw ( KiB/s): min= 240, max= 512, per=4.59%, avg=300.80, stdev=74.07, samples=20 00:29:06.444 iops : min= 60, max= 128, avg=75.20, stdev=18.52, samples=20 00:29:06.444 lat (msec) : 100=4.17%, 250=62.76%, 500=33.07% 00:29:06.444 cpu : usr=97.70%, sys=1.61%, ctx=46, majf=0, minf=33 00:29:06.444 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:29:06.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.444 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.444 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.444 filename0: (groupid=0, jobs=1): err= 0: pid=1033380: Thu Jul 25 19:19:57 2024 00:29:06.444 read: IOPS=78, 
BW=312KiB/s (320kB/s)(3136KiB/10045msec) 00:29:06.444 slat (nsec): min=8261, max=89115, avg=22401.02, stdev=15279.73 00:29:06.444 clat (msec): min=62, max=382, avg=204.81, stdev=54.15 00:29:06.444 lat (msec): min=62, max=382, avg=204.84, stdev=54.14 00:29:06.444 clat percentiles (msec): 00:29:06.444 | 1.00th=[ 63], 5.00th=[ 120], 10.00th=[ 131], 20.00th=[ 165], 00:29:06.444 | 30.00th=[ 169], 40.00th=[ 194], 50.00th=[ 211], 60.00th=[ 243], 00:29:06.444 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 259], 95.00th=[ 268], 00:29:06.444 | 99.00th=[ 300], 99.50th=[ 368], 99.90th=[ 384], 99.95th=[ 384], 00:29:06.444 | 99.99th=[ 384] 00:29:06.445 bw ( KiB/s): min= 240, max= 513, per=4.70%, avg=307.25, stdev=75.65, samples=20 00:29:06.445 iops : min= 60, max= 128, avg=76.80, stdev=18.88, samples=20 00:29:06.445 lat (msec) : 100=4.08%, 250=67.86%, 500=28.06% 00:29:06.445 cpu : usr=97.84%, sys=1.66%, ctx=20, majf=0, minf=62 00:29:06.445 IO depths : 1=3.8%, 2=9.7%, 4=24.6%, 8=53.2%, 16=8.7%, 32=0.0%, >=64=0.0% 00:29:06.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.445 filename0: (groupid=0, jobs=1): err= 0: pid=1033381: Thu Jul 25 19:19:57 2024 00:29:06.445 read: IOPS=66, BW=265KiB/s (272kB/s)(2688KiB/10128msec) 00:29:06.445 slat (usec): min=14, max=178, avg=35.67, stdev=16.70 00:29:06.445 clat (msec): min=164, max=283, avg=240.80, stdev=35.10 00:29:06.445 lat (msec): min=164, max=283, avg=240.84, stdev=35.10 00:29:06.445 clat percentiles (msec): 00:29:06.445 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 169], 20.00th=[ 224], 00:29:06.445 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 257], 00:29:06.445 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 279], 95.00th=[ 279], 00:29:06.445 | 99.00th=[ 284], 
99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 284], 00:29:06.445 | 99.99th=[ 284] 00:29:06.445 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=262.40, stdev=50.44, samples=20 00:29:06.445 iops : min= 32, max= 96, avg=65.60, stdev=12.61, samples=20 00:29:06.445 lat (msec) : 250=40.48%, 500=59.52% 00:29:06.445 cpu : usr=97.19%, sys=1.80%, ctx=32, majf=0, minf=51 00:29:06.445 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:06.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.445 filename0: (groupid=0, jobs=1): err= 0: pid=1033382: Thu Jul 25 19:19:57 2024 00:29:06.445 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10143msec) 00:29:06.445 slat (nsec): min=14292, max=99706, avg=46131.47, stdev=18579.45 00:29:06.445 clat (msec): min=164, max=314, avg=241.09, stdev=35.20 00:29:06.445 lat (msec): min=164, max=314, avg=241.14, stdev=35.20 00:29:06.445 clat percentiles (msec): 00:29:06.445 | 1.00th=[ 165], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 228], 00:29:06.445 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 257], 00:29:06.445 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 279], 95.00th=[ 279], 00:29:06.445 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 313], 99.95th=[ 313], 00:29:06.445 | 99.99th=[ 313] 00:29:06.445 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=262.40, stdev=50.44, samples=20 00:29:06.445 iops : min= 32, max= 96, avg=65.60, stdev=12.61, samples=20 00:29:06.445 lat (msec) : 250=41.96%, 500=58.04% 00:29:06.445 cpu : usr=97.97%, sys=1.60%, ctx=9, majf=0, minf=49 00:29:06.445 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:06.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.445 filename0: (groupid=0, jobs=1): err= 0: pid=1033383: Thu Jul 25 19:19:57 2024 00:29:06.445 read: IOPS=70, BW=283KiB/s (290kB/s)(2880KiB/10159msec) 00:29:06.445 slat (usec): min=8, max=244, avg=35.69, stdev=29.88 00:29:06.445 clat (msec): min=57, max=374, avg=225.43, stdev=55.49 00:29:06.445 lat (msec): min=57, max=374, avg=225.47, stdev=55.49 00:29:06.445 clat percentiles (msec): 00:29:06.445 | 1.00th=[ 58], 5.00th=[ 126], 10.00th=[ 165], 20.00th=[ 169], 00:29:06.445 | 30.00th=[ 201], 40.00th=[ 230], 50.00th=[ 251], 60.00th=[ 255], 00:29:06.445 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 284], 00:29:06.445 | 99.00th=[ 321], 99.50th=[ 334], 99.90th=[ 376], 99.95th=[ 376], 00:29:06.445 | 99.99th=[ 376] 00:29:06.445 bw ( KiB/s): min= 144, max= 510, per=4.30%, avg=281.50, stdev=75.87, samples=20 00:29:06.445 iops : min= 36, max= 127, avg=70.35, stdev=18.89, samples=20 00:29:06.445 lat (msec) : 100=4.17%, 250=45.97%, 500=49.86% 00:29:06.445 cpu : usr=96.34%, sys=2.30%, ctx=44, majf=0, minf=38 00:29:06.445 IO depths : 1=3.3%, 2=9.4%, 4=24.6%, 8=53.5%, 16=9.2%, 32=0.0%, >=64=0.0% 00:29:06.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.445 filename1: (groupid=0, jobs=1): err= 0: pid=1033384: Thu Jul 25 19:19:57 2024 00:29:06.445 read: IOPS=69, BW=276KiB/s (283kB/s)(2808KiB/10158msec) 00:29:06.445 slat (nsec): min=4587, max=98589, avg=37614.92, stdev=17326.08 00:29:06.445 clat (msec): min=58, max=365, avg=230.86, stdev=52.31 00:29:06.445 lat (msec): min=58, max=365, avg=230.90, 
stdev=52.31 00:29:06.445 clat percentiles (msec): 00:29:06.445 | 1.00th=[ 59], 5.00th=[ 120], 10.00th=[ 167], 20.00th=[ 171], 00:29:06.445 | 30.00th=[ 228], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 255], 00:29:06.445 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 279], 00:29:06.445 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 368], 99.95th=[ 368], 00:29:06.445 | 99.99th=[ 368] 00:29:06.445 bw ( KiB/s): min= 240, max= 512, per=4.19%, avg=274.40, stdev=63.00, samples=20 00:29:06.445 iops : min= 60, max= 128, avg=68.60, stdev=15.75, samples=20 00:29:06.445 lat (msec) : 100=3.42%, 250=41.31%, 500=55.27% 00:29:06.445 cpu : usr=98.29%, sys=1.27%, ctx=32, majf=0, minf=40 00:29:06.445 IO depths : 1=3.8%, 2=9.7%, 4=24.6%, 8=53.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:29:06.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.445 filename1: (groupid=0, jobs=1): err= 0: pid=1033385: Thu Jul 25 19:19:57 2024 00:29:06.445 read: IOPS=65, BW=262KiB/s (268kB/s)(2624KiB/10008msec) 00:29:06.445 slat (nsec): min=6557, max=76563, avg=27381.13, stdev=11495.91 00:29:06.445 clat (msec): min=167, max=280, avg=243.84, stdev=32.46 00:29:06.445 lat (msec): min=167, max=280, avg=243.86, stdev=32.45 00:29:06.445 clat percentiles (msec): 00:29:06.445 | 1.00th=[ 167], 5.00th=[ 169], 10.00th=[ 171], 20.00th=[ 228], 00:29:06.445 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 255], 00:29:06.445 | 70.00th=[ 259], 80.00th=[ 268], 90.00th=[ 275], 95.00th=[ 279], 00:29:06.445 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:29:06.445 | 99.99th=[ 279] 00:29:06.445 bw ( KiB/s): min= 128, max= 384, per=3.90%, avg=256.00, stdev=41.53, samples=20 00:29:06.445 iops : min= 32, max= 96, avg=64.00, stdev=10.38, 
samples=20 00:29:06.445 lat (msec) : 250=31.71%, 500=68.29% 00:29:06.445 cpu : usr=98.17%, sys=1.40%, ctx=24, majf=0, minf=36 00:29:06.445 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:06.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.445 filename1: (groupid=0, jobs=1): err= 0: pid=1033386: Thu Jul 25 19:19:57 2024 00:29:06.445 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10124msec) 00:29:06.445 slat (usec): min=6, max=192, avg=43.23, stdev=22.38 00:29:06.445 clat (msec): min=165, max=280, avg=240.65, stdev=34.63 00:29:06.445 lat (msec): min=165, max=280, avg=240.70, stdev=34.62 00:29:06.445 clat percentiles (msec): 00:29:06.445 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 169], 20.00th=[ 226], 00:29:06.445 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 257], 00:29:06.445 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 275], 95.00th=[ 279], 00:29:06.445 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:29:06.445 | 99.99th=[ 279] 00:29:06.445 bw ( KiB/s): min= 256, max= 384, per=4.01%, avg=262.40, stdev=28.62, samples=20 00:29:06.445 iops : min= 64, max= 96, avg=65.60, stdev= 7.16, samples=20 00:29:06.445 lat (msec) : 250=38.10%, 500=61.90% 00:29:06.445 cpu : usr=97.43%, sys=1.75%, ctx=47, majf=0, minf=46 00:29:06.445 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:06.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.445 filename1: (groupid=0, jobs=1): 
err= 0: pid=1033387: Thu Jul 25 19:19:57 2024 00:29:06.445 read: IOPS=66, BW=265KiB/s (271kB/s)(2680KiB/10130msec) 00:29:06.445 slat (nsec): min=9380, max=81043, avg=29898.84, stdev=13455.30 00:29:06.445 clat (msec): min=165, max=308, avg=241.33, stdev=35.07 00:29:06.445 lat (msec): min=165, max=308, avg=241.35, stdev=35.07 00:29:06.445 clat percentiles (msec): 00:29:06.445 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 171], 20.00th=[ 226], 00:29:06.445 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 257], 00:29:06.445 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 279], 00:29:06.445 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 309], 99.95th=[ 309], 00:29:06.445 | 99.99th=[ 309] 00:29:06.445 bw ( KiB/s): min= 128, max= 384, per=3.99%, avg=261.60, stdev=50.67, samples=20 00:29:06.445 iops : min= 32, max= 96, avg=65.40, stdev=12.67, samples=20 00:29:06.445 lat (msec) : 250=40.00%, 500=60.00% 00:29:06.445 cpu : usr=98.29%, sys=1.31%, ctx=18, majf=0, minf=46 00:29:06.445 IO depths : 1=2.7%, 2=9.0%, 4=25.1%, 8=53.6%, 16=9.7%, 32=0.0%, >=64=0.0% 00:29:06.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.445 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.446 filename1: (groupid=0, jobs=1): err= 0: pid=1033388: Thu Jul 25 19:19:57 2024 00:29:06.446 read: IOPS=70, BW=280KiB/s (287kB/s)(2816KiB/10040msec) 00:29:06.446 slat (nsec): min=8549, max=81257, avg=22020.73, stdev=9678.27 00:29:06.446 clat (msec): min=56, max=412, avg=227.99, stdev=60.54 00:29:06.446 lat (msec): min=56, max=412, avg=228.01, stdev=60.54 00:29:06.446 clat percentiles (msec): 00:29:06.446 | 1.00th=[ 57], 5.00th=[ 111], 10.00th=[ 153], 20.00th=[ 171], 00:29:06.446 | 30.00th=[ 203], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 255], 00:29:06.446 | 70.00th=[ 259], 80.00th=[ 264], 
90.00th=[ 279], 95.00th=[ 284], 00:29:06.446 | 99.00th=[ 397], 99.50th=[ 405], 99.90th=[ 414], 99.95th=[ 414], 00:29:06.446 | 99.99th=[ 414] 00:29:06.446 bw ( KiB/s): min= 128, max= 512, per=4.21%, avg=275.20, stdev=82.34, samples=20 00:29:06.446 iops : min= 32, max= 128, avg=68.80, stdev=20.59, samples=20 00:29:06.446 lat (msec) : 100=4.55%, 250=41.19%, 500=54.26% 00:29:06.446 cpu : usr=98.24%, sys=1.33%, ctx=14, majf=0, minf=44 00:29:06.446 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:29:06.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.446 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.446 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.446 filename1: (groupid=0, jobs=1): err= 0: pid=1033389: Thu Jul 25 19:19:57 2024 00:29:06.446 read: IOPS=72, BW=290KiB/s (297kB/s)(2944KiB/10162msec) 00:29:06.446 slat (usec): min=8, max=427, avg=36.67, stdev=34.87 00:29:06.446 clat (msec): min=62, max=333, avg=220.59, stdev=52.19 00:29:06.446 lat (msec): min=62, max=333, avg=220.63, stdev=52.20 00:29:06.446 clat percentiles (msec): 00:29:06.446 | 1.00th=[ 63], 5.00th=[ 153], 10.00th=[ 165], 20.00th=[ 169], 00:29:06.446 | 30.00th=[ 176], 40.00th=[ 230], 50.00th=[ 249], 60.00th=[ 253], 00:29:06.446 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 266], 95.00th=[ 279], 00:29:06.446 | 99.00th=[ 279], 99.50th=[ 317], 99.90th=[ 334], 99.95th=[ 334], 00:29:06.446 | 99.99th=[ 334] 00:29:06.446 bw ( KiB/s): min= 240, max= 496, per=4.41%, avg=288.00, stdev=69.26, samples=20 00:29:06.446 iops : min= 60, max= 124, avg=72.00, stdev=17.31, samples=20 00:29:06.446 lat (msec) : 100=4.35%, 250=49.73%, 500=45.92% 00:29:06.446 cpu : usr=95.86%, sys=2.40%, ctx=46, majf=0, minf=44 00:29:06.446 IO depths : 1=4.3%, 2=10.5%, 4=24.9%, 8=52.2%, 16=8.2%, 32=0.0%, >=64=0.0% 00:29:06.446 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.446 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.446 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.446 filename1: (groupid=0, jobs=1): err= 0: pid=1033390: Thu Jul 25 19:19:57 2024 00:29:06.446 read: IOPS=64, BW=258KiB/s (265kB/s)(2616KiB/10121msec) 00:29:06.446 slat (usec): min=7, max=106, avg=32.40, stdev=15.37 00:29:06.446 clat (msec): min=125, max=381, avg=246.94, stdev=42.45 00:29:06.446 lat (msec): min=125, max=381, avg=246.97, stdev=42.45 00:29:06.446 clat percentiles (msec): 00:29:06.446 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 171], 20.00th=[ 215], 00:29:06.446 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 259], 00:29:06.446 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 309], 00:29:06.446 | 99.00th=[ 380], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:29:06.446 | 99.99th=[ 380] 00:29:06.446 bw ( KiB/s): min= 128, max= 384, per=3.92%, avg=256.00, stdev=58.73, samples=20 00:29:06.446 iops : min= 32, max= 96, avg=64.00, stdev=14.68, samples=20 00:29:06.446 lat (msec) : 250=37.92%, 500=62.08% 00:29:06.446 cpu : usr=98.38%, sys=1.14%, ctx=32, majf=0, minf=36 00:29:06.446 IO depths : 1=3.8%, 2=10.1%, 4=25.1%, 8=52.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:29:06.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.446 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.446 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.446 filename1: (groupid=0, jobs=1): err= 0: pid=1033391: Thu Jul 25 19:19:57 2024 00:29:06.446 read: IOPS=67, BW=271KiB/s (277kB/s)(2752KiB/10158msec) 00:29:06.446 slat (usec): min=4, max=114, avg=69.86, stdev=19.01 00:29:06.446 clat (msec): min=66, max=460, avg=233.81, stdev=57.17 
00:29:06.446 lat (msec): min=66, max=460, avg=233.88, stdev=57.18 00:29:06.446 clat percentiles (msec): 00:29:06.446 | 1.00th=[ 67], 5.00th=[ 125], 10.00th=[ 167], 20.00th=[ 180], 00:29:06.446 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:29:06.446 | 70.00th=[ 257], 80.00th=[ 266], 90.00th=[ 275], 95.00th=[ 279], 00:29:06.446 | 99.00th=[ 430], 99.50th=[ 456], 99.90th=[ 460], 99.95th=[ 460], 00:29:06.446 | 99.99th=[ 460] 00:29:06.446 bw ( KiB/s): min= 144, max= 512, per=4.10%, avg=268.80, stdev=68.98, samples=20 00:29:06.446 iops : min= 36, max= 128, avg=67.20, stdev=17.25, samples=20 00:29:06.446 lat (msec) : 100=4.36%, 250=36.48%, 500=59.16% 00:29:06.446 cpu : usr=98.12%, sys=1.32%, ctx=46, majf=0, minf=50 00:29:06.446 IO depths : 1=3.3%, 2=7.8%, 4=22.2%, 8=57.0%, 16=9.6%, 32=0.0%, >=64=0.0% 00:29:06.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.446 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.446 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.446 filename2: (groupid=0, jobs=1): err= 0: pid=1033392: Thu Jul 25 19:19:57 2024 00:29:06.446 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10110msec) 00:29:06.446 slat (nsec): min=8991, max=75923, avg=26813.09, stdev=13625.86 00:29:06.446 clat (msec): min=110, max=405, avg=244.05, stdev=40.89 00:29:06.446 lat (msec): min=110, max=405, avg=244.07, stdev=40.88 00:29:06.446 clat percentiles (msec): 00:29:06.446 | 1.00th=[ 111], 5.00th=[ 169], 10.00th=[ 169], 20.00th=[ 228], 00:29:06.446 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 257], 00:29:06.446 | 70.00th=[ 259], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 288], 00:29:06.446 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 405], 99.95th=[ 405], 00:29:06.446 | 99.99th=[ 405] 00:29:06.446 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=262.74, stdev=65.82, samples=19 
00:29:06.446 iops : min= 32, max= 96, avg=65.68, stdev=16.46, samples=19 00:29:06.446 lat (msec) : 250=34.15%, 500=65.85% 00:29:06.446 cpu : usr=97.72%, sys=1.63%, ctx=60, majf=0, minf=53 00:29:06.446 IO depths : 1=2.0%, 2=8.2%, 4=25.0%, 8=54.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:29:06.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.446 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.446 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.446 filename2: (groupid=0, jobs=1): err= 0: pid=1033393: Thu Jul 25 19:19:57 2024 00:29:06.446 read: IOPS=64, BW=259KiB/s (266kB/s)(2624KiB/10118msec) 00:29:06.446 slat (nsec): min=8924, max=85228, avg=30043.20, stdev=15179.39 00:29:06.446 clat (msec): min=88, max=418, avg=244.02, stdev=42.52 00:29:06.446 lat (msec): min=88, max=418, avg=244.05, stdev=42.51 00:29:06.446 clat percentiles (msec): 00:29:06.446 | 1.00th=[ 120], 5.00th=[ 169], 10.00th=[ 169], 20.00th=[ 228], 00:29:06.446 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 257], 00:29:06.446 | 70.00th=[ 259], 80.00th=[ 268], 90.00th=[ 279], 95.00th=[ 284], 00:29:06.446 | 99.00th=[ 300], 99.50th=[ 414], 99.90th=[ 418], 99.95th=[ 418], 00:29:06.446 | 99.99th=[ 418] 00:29:06.446 bw ( KiB/s): min= 240, max= 384, per=3.99%, avg=261.89, stdev=30.74, samples=19 00:29:06.446 iops : min= 60, max= 96, avg=65.47, stdev= 7.68, samples=19 00:29:06.446 lat (msec) : 100=0.61%, 250=34.15%, 500=65.24% 00:29:06.446 cpu : usr=98.09%, sys=1.36%, ctx=47, majf=0, minf=56 00:29:06.446 IO depths : 1=2.9%, 2=9.1%, 4=25.0%, 8=53.4%, 16=9.6%, 32=0.0%, >=64=0.0% 00:29:06.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.446 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.446 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.446 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:29:06.446 filename2: (groupid=0, jobs=1): err= 0: pid=1033394: Thu Jul 25 19:19:57 2024 00:29:06.446 read: IOPS=66, BW=264KiB/s (271kB/s)(2680KiB/10143msec) 00:29:06.446 slat (nsec): min=8923, max=77797, avg=36219.83, stdev=13987.41 00:29:06.446 clat (msec): min=125, max=325, avg=241.55, stdev=35.59 00:29:06.446 lat (msec): min=125, max=325, avg=241.59, stdev=35.58 00:29:06.446 clat percentiles (msec): 00:29:06.446 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 169], 20.00th=[ 228], 00:29:06.446 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 257], 00:29:06.446 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 279], 00:29:06.446 | 99.00th=[ 284], 99.50th=[ 317], 99.90th=[ 326], 99.95th=[ 326], 00:29:06.446 | 99.99th=[ 326] 00:29:06.446 bw ( KiB/s): min= 128, max= 384, per=3.99%, avg=261.60, stdev=48.77, samples=20 00:29:06.446 iops : min= 32, max= 96, avg=65.40, stdev=12.19, samples=20 00:29:06.446 lat (msec) : 250=40.30%, 500=59.70% 00:29:06.446 cpu : usr=98.06%, sys=1.44%, ctx=20, majf=0, minf=41 00:29:06.446 IO depths : 1=4.9%, 2=11.2%, 4=25.1%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:29:06.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.446 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.446 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.446 filename2: (groupid=0, jobs=1): err= 0: pid=1033395: Thu Jul 25 19:19:57 2024 00:29:06.446 read: IOPS=66, BW=265KiB/s (272kB/s)(2688KiB/10135msec) 00:29:06.446 slat (usec): min=7, max=112, avg=34.59, stdev=20.53 00:29:06.446 clat (msec): min=83, max=412, avg=241.00, stdev=56.51 00:29:06.446 lat (msec): min=83, max=412, avg=241.03, stdev=56.50 00:29:06.446 clat percentiles (msec): 00:29:06.446 | 1.00th=[ 85], 5.00th=[ 159], 10.00th=[ 169], 20.00th=[ 171], 00:29:06.447 | 30.00th=[ 239], 40.00th=[ 247], 
50.00th=[ 253], 60.00th=[ 255], 00:29:06.447 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 279], 95.00th=[ 376], 00:29:06.447 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:29:06.447 | 99.99th=[ 414] 00:29:06.447 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=262.40, stdev=74.94, samples=20 00:29:06.447 iops : min= 32, max= 96, avg=65.60, stdev=18.73, samples=20 00:29:06.447 lat (msec) : 100=1.34%, 250=42.11%, 500=56.55% 00:29:06.447 cpu : usr=96.14%, sys=2.34%, ctx=143, majf=0, minf=50 00:29:06.447 IO depths : 1=5.1%, 2=11.2%, 4=24.4%, 8=51.9%, 16=7.4%, 32=0.0%, >=64=0.0% 00:29:06.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.447 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.447 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.447 filename2: (groupid=0, jobs=1): err= 0: pid=1033396: Thu Jul 25 19:19:57 2024 00:29:06.447 read: IOPS=64, BW=259KiB/s (266kB/s)(2624KiB/10119msec) 00:29:06.447 slat (nsec): min=9344, max=55648, avg=24460.82, stdev=8163.85 00:29:06.447 clat (msec): min=165, max=298, avg=246.44, stdev=33.56 00:29:06.447 lat (msec): min=165, max=299, avg=246.46, stdev=33.56 00:29:06.447 clat percentiles (msec): 00:29:06.447 | 1.00th=[ 169], 5.00th=[ 169], 10.00th=[ 169], 20.00th=[ 241], 00:29:06.447 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 257], 00:29:06.447 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 288], 00:29:06.447 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:29:06.447 | 99.99th=[ 300] 00:29:06.447 bw ( KiB/s): min= 144, max= 384, per=3.92%, avg=256.00, stdev=53.70, samples=20 00:29:06.447 iops : min= 36, max= 96, avg=64.00, stdev=13.42, samples=20 00:29:06.447 lat (msec) : 250=34.15%, 500=65.85% 00:29:06.447 cpu : usr=96.97%, sys=1.85%, ctx=42, majf=0, minf=59 00:29:06.447 IO depths : 1=0.3%, 2=6.6%, 4=25.0%, 
8=55.9%, 16=12.2%, 32=0.0%, >=64=0.0% 00:29:06.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.447 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.447 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.447 filename2: (groupid=0, jobs=1): err= 0: pid=1033397: Thu Jul 25 19:19:57 2024 00:29:06.447 read: IOPS=81, BW=327KiB/s (335kB/s)(3288KiB/10046msec) 00:29:06.447 slat (usec): min=5, max=256, avg=22.29, stdev=21.40 00:29:06.447 clat (msec): min=63, max=383, avg=195.36, stdev=50.20 00:29:06.447 lat (msec): min=63, max=383, avg=195.38, stdev=50.20 00:29:06.447 clat percentiles (msec): 00:29:06.447 | 1.00th=[ 64], 5.00th=[ 108], 10.00th=[ 140], 20.00th=[ 163], 00:29:06.447 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 188], 60.00th=[ 201], 00:29:06.447 | 70.00th=[ 239], 80.00th=[ 253], 90.00th=[ 257], 95.00th=[ 266], 00:29:06.447 | 99.00th=[ 279], 99.50th=[ 300], 99.90th=[ 384], 99.95th=[ 384], 00:29:06.447 | 99.99th=[ 384] 00:29:06.447 bw ( KiB/s): min= 128, max= 512, per=4.93%, avg=322.40, stdev=87.37, samples=20 00:29:06.447 iops : min= 32, max= 128, avg=80.60, stdev=21.84, samples=20 00:29:06.447 lat (msec) : 100=3.89%, 250=73.48%, 500=22.63% 00:29:06.447 cpu : usr=96.99%, sys=1.78%, ctx=60, majf=0, minf=87 00:29:06.447 IO depths : 1=2.6%, 2=6.9%, 4=19.3%, 8=61.2%, 16=10.0%, 32=0.0%, >=64=0.0% 00:29:06.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.447 complete : 0=0.0%, 4=92.6%, 8=1.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.447 issued rwts: total=822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.447 filename2: (groupid=0, jobs=1): err= 0: pid=1033398: Thu Jul 25 19:19:57 2024 00:29:06.447 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10121msec) 00:29:06.447 slat (nsec): min=9547, max=73489, 
avg=37561.64, stdev=14464.76 00:29:06.447 clat (msec): min=163, max=367, avg=240.63, stdev=36.44 00:29:06.447 lat (msec): min=163, max=367, avg=240.67, stdev=36.43 00:29:06.447 clat percentiles (msec): 00:29:06.447 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 169], 20.00th=[ 226], 00:29:06.447 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 255], 00:29:06.447 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 279], 95.00th=[ 279], 00:29:06.447 | 99.00th=[ 279], 99.50th=[ 368], 99.90th=[ 368], 99.95th=[ 368], 00:29:06.447 | 99.99th=[ 368] 00:29:06.447 bw ( KiB/s): min= 256, max= 384, per=4.01%, avg=262.40, stdev=28.62, samples=20 00:29:06.447 iops : min= 64, max= 96, avg=65.60, stdev= 7.16, samples=20 00:29:06.447 lat (msec) : 250=38.69%, 500=61.31% 00:29:06.447 cpu : usr=98.24%, sys=1.28%, ctx=48, majf=0, minf=32 00:29:06.447 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:29:06.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.447 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.447 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.447 filename2: (groupid=0, jobs=1): err= 0: pid=1033399: Thu Jul 25 19:19:57 2024 00:29:06.447 read: IOPS=69, BW=277KiB/s (284kB/s)(2816KiB/10157msec) 00:29:06.447 slat (usec): min=7, max=186, avg=45.72, stdev=27.66 00:29:06.447 clat (msec): min=57, max=366, avg=230.44, stdev=52.20 00:29:06.447 lat (msec): min=57, max=366, avg=230.48, stdev=52.20 00:29:06.447 clat percentiles (msec): 00:29:06.447 | 1.00th=[ 58], 5.00th=[ 148], 10.00th=[ 167], 20.00th=[ 169], 00:29:06.447 | 30.00th=[ 228], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:29:06.447 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 279], 00:29:06.447 | 99.00th=[ 309], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 368], 00:29:06.447 | 99.99th=[ 368] 00:29:06.447 bw ( KiB/s): min= 
256, max= 512, per=4.21%, avg=275.20, stdev=62.64, samples=20 00:29:06.447 iops : min= 64, max= 128, avg=68.80, stdev=15.66, samples=20 00:29:06.447 lat (msec) : 100=4.55%, 250=39.49%, 500=55.97% 00:29:06.447 cpu : usr=97.39%, sys=1.63%, ctx=109, majf=0, minf=40 00:29:06.447 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:29:06.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.447 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.447 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.447 00:29:06.447 Run status group 0 (all jobs): 00:29:06.447 READ: bw=6536KiB/s (6693kB/s), 258KiB/s-327KiB/s (265kB/s-335kB/s), io=64.9MiB (68.0MB), run=10008-10162msec 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.447 19:19:57 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:06.447 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.448 bdev_null0 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.448 [2024-07-25 19:19:57.965930] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:29:06.448 bdev_null1 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.448 19:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:06.448 { 00:29:06.448 "params": { 00:29:06.448 "name": "Nvme$subsystem", 00:29:06.448 "trtype": "$TEST_TRANSPORT", 00:29:06.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.448 "adrfam": "ipv4", 00:29:06.448 "trsvcid": "$NVMF_PORT", 00:29:06.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.448 "hdgst": ${hdgst:-false}, 00:29:06.448 "ddgst": ${ddgst:-false} 00:29:06.448 }, 00:29:06.448 "method": "bdev_nvme_attach_controller" 00:29:06.448 } 00:29:06.448 EOF 00:29:06.448 )") 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:06.448 19:19:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:06.448 { 00:29:06.448 "params": { 00:29:06.448 "name": "Nvme$subsystem", 00:29:06.448 "trtype": "$TEST_TRANSPORT", 00:29:06.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.448 "adrfam": "ipv4", 00:29:06.448 "trsvcid": "$NVMF_PORT", 00:29:06.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.448 "hdgst": ${hdgst:-false}, 00:29:06.448 "ddgst": ${ddgst:-false} 00:29:06.448 }, 00:29:06.448 "method": "bdev_nvme_attach_controller" 00:29:06.448 } 00:29:06.448 EOF 00:29:06.448 )") 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:06.448 19:19:58 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:06.448 "params": { 00:29:06.448 "name": "Nvme0", 00:29:06.448 "trtype": "tcp", 00:29:06.448 "traddr": "10.0.0.2", 00:29:06.448 "adrfam": "ipv4", 00:29:06.448 "trsvcid": "4420", 00:29:06.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:06.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:06.448 "hdgst": false, 00:29:06.448 "ddgst": false 00:29:06.448 }, 00:29:06.448 "method": "bdev_nvme_attach_controller" 00:29:06.448 },{ 00:29:06.448 "params": { 00:29:06.448 "name": "Nvme1", 00:29:06.448 "trtype": "tcp", 00:29:06.448 "traddr": "10.0.0.2", 00:29:06.448 "adrfam": "ipv4", 00:29:06.448 "trsvcid": "4420", 00:29:06.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:06.448 "hdgst": false, 00:29:06.448 "ddgst": false 00:29:06.448 }, 00:29:06.448 "method": "bdev_nvme_attach_controller" 00:29:06.448 }' 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 
00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:06.448 19:19:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:06.448 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:06.448 ... 00:29:06.448 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:06.448 ... 00:29:06.448 fio-3.35 00:29:06.449 Starting 4 threads 00:29:06.449 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.710 00:29:11.710 filename0: (groupid=0, jobs=1): err= 0: pid=1034903: Thu Jul 25 19:20:04 2024 00:29:11.710 read: IOPS=1853, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5002msec) 00:29:11.710 slat (nsec): min=6859, max=50038, avg=10899.74, stdev=4142.30 00:29:11.710 clat (usec): min=1628, max=10497, avg=4279.94, stdev=628.25 00:29:11.710 lat (usec): min=1641, max=10517, avg=4290.84, stdev=628.16 00:29:11.710 clat percentiles (usec): 00:29:11.710 | 1.00th=[ 2966], 5.00th=[ 3392], 10.00th=[ 3654], 20.00th=[ 3884], 00:29:11.710 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:29:11.710 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4883], 95.00th=[ 5538], 00:29:11.710 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7701], 99.95th=[ 8356], 00:29:11.710 | 99.99th=[10552] 00:29:11.710 bw ( KiB/s): min=14300, max=15360, per=25.80%, avg=14826.80, stdev=331.98, samples=10 00:29:11.710 iops : min= 1787, max= 1920, avg=1853.30, stdev=41.59, samples=10 00:29:11.710 lat (msec) : 2=0.04%, 4=27.05%, 10=72.89%, 20=0.02% 00:29:11.710 cpu : usr=92.14%, sys=7.14%, ctx=7, majf=0, minf=9 00:29:11.710 IO depths : 1=0.1%, 
2=3.6%, 4=69.2%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:11.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.710 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.710 issued rwts: total=9273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:11.710 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:11.710 filename0: (groupid=0, jobs=1): err= 0: pid=1034904: Thu Jul 25 19:20:04 2024 00:29:11.710 read: IOPS=1783, BW=13.9MiB/s (14.6MB/s)(69.7MiB/5001msec) 00:29:11.710 slat (nsec): min=6870, max=46870, avg=11109.10, stdev=4025.04 00:29:11.710 clat (usec): min=835, max=7578, avg=4451.85, stdev=705.35 00:29:11.710 lat (usec): min=844, max=7592, avg=4462.96, stdev=705.04 00:29:11.710 clat percentiles (usec): 00:29:11.710 | 1.00th=[ 3130], 5.00th=[ 3621], 10.00th=[ 3818], 20.00th=[ 4015], 00:29:11.710 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:29:11.710 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 5407], 95.00th=[ 6128], 00:29:11.711 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 7177], 99.95th=[ 7570], 00:29:11.711 | 99.99th=[ 7570] 00:29:11.711 bw ( KiB/s): min=13739, max=14880, per=24.83%, avg=14268.30, stdev=306.76, samples=10 00:29:11.711 iops : min= 1717, max= 1860, avg=1783.50, stdev=38.42, samples=10 00:29:11.711 lat (usec) : 1000=0.02% 00:29:11.711 lat (msec) : 2=0.03%, 4=17.92%, 10=82.02% 00:29:11.711 cpu : usr=92.54%, sys=6.98%, ctx=9, majf=0, minf=9 00:29:11.711 IO depths : 1=0.1%, 2=2.0%, 4=68.7%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.711 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.711 issued rwts: total=8921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:11.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:11.711 filename1: (groupid=0, jobs=1): err= 0: pid=1034905: Thu Jul 25 19:20:04 2024 00:29:11.711 read: 
IOPS=1745, BW=13.6MiB/s (14.3MB/s)(68.2MiB/5002msec) 00:29:11.711 slat (nsec): min=6723, max=42294, avg=11490.99, stdev=4284.61 00:29:11.711 clat (usec): min=2594, max=7602, avg=4550.64, stdev=696.95 00:29:11.711 lat (usec): min=2603, max=7616, avg=4562.13, stdev=696.82 00:29:11.711 clat percentiles (usec): 00:29:11.711 | 1.00th=[ 3326], 5.00th=[ 3720], 10.00th=[ 3916], 20.00th=[ 4080], 00:29:11.711 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4490], 00:29:11.711 | 70.00th=[ 4686], 80.00th=[ 4948], 90.00th=[ 5473], 95.00th=[ 6063], 00:29:11.711 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 7504], 99.95th=[ 7570], 00:29:11.711 | 99.99th=[ 7635] 00:29:11.711 bw ( KiB/s): min=12368, max=15104, per=24.28%, avg=13955.56, stdev=854.91, samples=9 00:29:11.711 iops : min= 1546, max= 1888, avg=1744.44, stdev=106.86, samples=9 00:29:11.711 lat (msec) : 4=14.81%, 10=85.19% 00:29:11.711 cpu : usr=92.36%, sys=7.12%, ctx=12, majf=0, minf=9 00:29:11.711 IO depths : 1=0.1%, 2=1.4%, 4=68.7%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.711 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.711 issued rwts: total=8729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:11.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:11.711 filename1: (groupid=0, jobs=1): err= 0: pid=1034906: Thu Jul 25 19:20:04 2024 00:29:11.711 read: IOPS=1802, BW=14.1MiB/s (14.8MB/s)(70.5MiB/5003msec) 00:29:11.711 slat (nsec): min=5032, max=41321, avg=10691.58, stdev=4070.82 00:29:11.711 clat (usec): min=1644, max=8614, avg=4403.47, stdev=646.87 00:29:11.711 lat (usec): min=1661, max=8638, avg=4414.16, stdev=646.84 00:29:11.711 clat percentiles (usec): 00:29:11.711 | 1.00th=[ 3195], 5.00th=[ 3654], 10.00th=[ 3818], 20.00th=[ 4015], 00:29:11.711 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:29:11.711 | 70.00th=[ 4490], 80.00th=[ 4621], 
90.00th=[ 5145], 95.00th=[ 5800], 00:29:11.711 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 8455], 99.95th=[ 8586], 00:29:11.711 | 99.99th=[ 8586] 00:29:11.711 bw ( KiB/s): min=13584, max=14928, per=25.10%, avg=14425.60, stdev=409.33, samples=10 00:29:11.711 iops : min= 1698, max= 1866, avg=1803.20, stdev=51.17, samples=10 00:29:11.711 lat (msec) : 2=0.02%, 4=20.00%, 10=79.98% 00:29:11.711 cpu : usr=92.02%, sys=7.34%, ctx=17, majf=0, minf=9 00:29:11.711 IO depths : 1=0.1%, 2=2.2%, 4=70.3%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.711 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.711 issued rwts: total=9019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:11.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:11.711 00:29:11.711 Run status group 0 (all jobs): 00:29:11.711 READ: bw=56.1MiB/s (58.9MB/s), 13.6MiB/s-14.5MiB/s (14.3MB/s-15.2MB/s), io=281MiB (294MB), run=5001-5003msec 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- 
# rpc_cmd bdev_null_delete bdev_null0 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.276 00:29:12.276 real 0m24.830s 00:29:12.276 user 4m34.837s 00:29:12.276 sys 0m7.217s 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:12.276 19:20:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:12.276 ************************************ 00:29:12.276 END TEST fio_dif_rand_params 00:29:12.276 ************************************ 00:29:12.276 19:20:04 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 
00:29:12.276 19:20:04 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:12.276 19:20:04 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:12.276 19:20:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:12.276 ************************************ 00:29:12.276 START TEST fio_dif_digest 00:29:12.276 ************************************ 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:12.276 bdev_null0 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:12.276 [2024-07-25 19:20:04.582049] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:12.276 19:20:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:12.276 { 00:29:12.276 "params": { 00:29:12.276 "name": "Nvme$subsystem", 00:29:12.276 "trtype": "$TEST_TRANSPORT", 00:29:12.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.276 "adrfam": "ipv4", 00:29:12.276 "trsvcid": "$NVMF_PORT", 00:29:12.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.276 "hdgst": ${hdgst:-false}, 00:29:12.276 "ddgst": ${ddgst:-false} 00:29:12.276 }, 00:29:12.277 "method": "bdev_nvme_attach_controller" 00:29:12.277 } 00:29:12.277 EOF 00:29:12.277 )") 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:12.277 "params": { 00:29:12.277 "name": "Nvme0", 00:29:12.277 "trtype": "tcp", 00:29:12.277 "traddr": "10.0.0.2", 00:29:12.277 "adrfam": "ipv4", 00:29:12.277 "trsvcid": "4420", 00:29:12.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:12.277 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:12.277 "hdgst": true, 00:29:12.277 "ddgst": true 00:29:12.277 }, 00:29:12.277 "method": "bdev_nvme_attach_controller" 00:29:12.277 }' 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:12.277 19:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:12.535 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:12.535 ... 
00:29:12.535 fio-3.35 00:29:12.535 Starting 3 threads 00:29:12.535 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.754 00:29:24.754 filename0: (groupid=0, jobs=1): err= 0: pid=1035662: Thu Jul 25 19:20:15 2024 00:29:24.754 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(262MiB/10009msec) 00:29:24.754 slat (nsec): min=7673, max=56278, avg=17532.38, stdev=5825.31 00:29:24.754 clat (usec): min=8134, max=55826, avg=14282.38, stdev=2831.78 00:29:24.754 lat (usec): min=8145, max=55841, avg=14299.91, stdev=2831.78 00:29:24.754 clat percentiles (usec): 00:29:24.754 | 1.00th=[ 9503], 5.00th=[10421], 10.00th=[11076], 20.00th=[12911], 00:29:24.754 | 30.00th=[13698], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:29:24.754 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16188], 95.00th=[16712], 00:29:24.754 | 99.00th=[17433], 99.50th=[18744], 99.90th=[54789], 99.95th=[54789], 00:29:24.754 | 99.99th=[55837] 00:29:24.754 bw ( KiB/s): min=21504, max=29184, per=35.57%, avg=26831.25, stdev=1600.07, samples=20 00:29:24.754 iops : min= 168, max= 228, avg=209.60, stdev=12.53, samples=20 00:29:24.754 lat (msec) : 10=2.53%, 20=97.00%, 50=0.19%, 100=0.29% 00:29:24.754 cpu : usr=92.43%, sys=7.06%, ctx=40, majf=0, minf=141 00:29:24.754 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:24.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.754 issued rwts: total=2099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.754 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:24.754 filename0: (groupid=0, jobs=1): err= 0: pid=1035663: Thu Jul 25 19:20:15 2024 00:29:24.754 read: IOPS=189, BW=23.7MiB/s (24.9MB/s)(238MiB/10044msec) 00:29:24.754 slat (nsec): min=6675, max=77475, avg=16396.11, stdev=5287.74 00:29:24.754 clat (usec): min=8340, max=98035, avg=15757.73, stdev=7337.13 00:29:24.754 lat (usec): min=8353, max=98068, avg=15774.13, 
stdev=7337.49 00:29:24.754 clat percentiles (usec): 00:29:24.754 | 1.00th=[ 9634], 5.00th=[10814], 10.00th=[12518], 20.00th=[13566], 00:29:24.754 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14746], 60.00th=[15139], 00:29:24.754 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16581], 95.00th=[17433], 00:29:24.754 | 99.00th=[56361], 99.50th=[56886], 99.90th=[60031], 99.95th=[98042], 00:29:24.754 | 99.99th=[98042] 00:29:24.754 bw ( KiB/s): min=20992, max=28416, per=32.32%, avg=24384.00, stdev=1933.43, samples=20 00:29:24.754 iops : min= 164, max= 222, avg=190.50, stdev=15.10, samples=20 00:29:24.754 lat (msec) : 10=1.78%, 20=95.33%, 50=0.05%, 100=2.83% 00:29:24.754 cpu : usr=92.57%, sys=6.96%, ctx=20, majf=0, minf=165 00:29:24.754 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:24.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.754 issued rwts: total=1907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.754 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:24.754 filename0: (groupid=0, jobs=1): err= 0: pid=1035664: Thu Jul 25 19:20:15 2024 00:29:24.754 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(239MiB/10047msec) 00:29:24.754 slat (nsec): min=6178, max=62797, avg=20120.43, stdev=6431.71 00:29:24.754 clat (usec): min=8280, max=58411, avg=15691.30, stdev=6281.93 00:29:24.754 lat (usec): min=8300, max=58430, avg=15711.42, stdev=6281.92 00:29:24.754 clat percentiles (usec): 00:29:24.754 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[11600], 20.00th=[13829], 00:29:24.754 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15139], 60.00th=[15533], 00:29:24.754 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17695], 00:29:24.754 | 99.00th=[56361], 99.50th=[57410], 99.90th=[57934], 99.95th=[58459], 00:29:24.754 | 99.99th=[58459] 00:29:24.754 bw ( KiB/s): min=20736, max=26624, per=32.46%, avg=24486.40, stdev=1576.50, 
samples=20 00:29:24.754 iops : min= 162, max= 208, avg=191.30, stdev=12.32, samples=20 00:29:24.754 lat (msec) : 10=3.29%, 20=94.52%, 50=0.10%, 100=2.09% 00:29:24.754 cpu : usr=93.53%, sys=5.94%, ctx=30, majf=0, minf=188 00:29:24.754 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:24.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.754 issued rwts: total=1915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.754 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:24.754 00:29:24.754 Run status group 0 (all jobs): 00:29:24.754 READ: bw=73.7MiB/s (77.2MB/s), 23.7MiB/s-26.2MiB/s (24.9MB/s-27.5MB/s), io=740MiB (776MB), run=10009-10047msec 00:29:24.754 19:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:24.754 19:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:24.754 19:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:24.754 19:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:24.754 19:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:24.754 19:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:24.754 19:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.754 19:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:24.754 19:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.754 19:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:24.754 19:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.754 19:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:24.754 19:20:15 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.754 00:29:24.754 real 0m11.378s 00:29:24.754 user 0m29.308s 00:29:24.754 sys 0m2.301s 00:29:24.754 19:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:24.754 19:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:24.754 ************************************ 00:29:24.754 END TEST fio_dif_digest 00:29:24.754 ************************************ 00:29:24.754 19:20:15 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:24.754 19:20:15 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:24.754 19:20:15 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:24.754 19:20:15 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:24.754 19:20:15 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:24.754 19:20:15 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:24.754 19:20:15 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:24.754 19:20:15 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:24.754 rmmod nvme_tcp 00:29:24.754 rmmod nvme_fabrics 00:29:24.754 rmmod nvme_keyring 00:29:24.754 19:20:16 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:24.754 19:20:16 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:24.754 19:20:16 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:24.755 19:20:16 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1029482 ']' 00:29:24.755 19:20:16 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1029482 00:29:24.755 19:20:16 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1029482 ']' 00:29:24.755 19:20:16 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1029482 00:29:24.755 19:20:16 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:29:24.755 19:20:16 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:24.755 19:20:16 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1029482 00:29:24.755 19:20:16 nvmf_dif -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:24.755 19:20:16 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:24.755 19:20:16 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1029482' 00:29:24.755 killing process with pid 1029482 00:29:24.755 19:20:16 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1029482 00:29:24.755 19:20:16 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1029482 00:29:24.755 19:20:16 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:24.755 19:20:16 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:25.013 Waiting for block devices as requested 00:29:25.013 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:25.271 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:25.271 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:25.272 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:25.530 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:25.530 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:25.530 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:25.530 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:25.789 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:29:25.789 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:25.789 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:26.047 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:26.047 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:26.047 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:26.305 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:26.305 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:26.305 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:26.562 19:20:18 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:26.562 19:20:18 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:26.562 19:20:18 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:26.562 
19:20:18 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:26.562 19:20:18 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.562 19:20:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:26.562 19:20:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.465 19:20:20 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:28.465 00:29:28.465 real 1m8.468s 00:29:28.465 user 6m31.758s 00:29:28.465 sys 0m19.633s 00:29:28.466 19:20:20 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:28.466 19:20:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:28.466 ************************************ 00:29:28.466 END TEST nvmf_dif 00:29:28.466 ************************************ 00:29:28.466 19:20:20 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:28.466 19:20:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:28.466 19:20:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:28.466 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:29:28.466 ************************************ 00:29:28.466 START TEST nvmf_abort_qd_sizes 00:29:28.466 ************************************ 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:28.466 * Looking for test storage... 
00:29:28.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.466 19:20:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:28.466 19:20:20 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.725 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:28.725 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:28.725 19:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:28.725 19:20:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:31.259 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:31.259 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:29:31.259 Found net devices under 0000:09:00.0: cvl_0_0 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.259 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:31.260 Found net devices under 0000:09:00.1: cvl_0_1 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:31.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:31.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:29:31.260 00:29:31.260 --- 10.0.0.2 ping statistics --- 00:29:31.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.260 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:31.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:31.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:29:31.260 00:29:31.260 --- 10.0.0.1 ping statistics --- 00:29:31.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.260 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:31.260 19:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:32.636 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:32.636 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:32.636 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:32.636 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:32.636 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:32.636 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:32.636 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:32.636 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:32.636 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:32.636 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:32.636 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:32.636 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:32.636 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:32.636 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:32.636 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:29:32.636 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:33.572 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1041068 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1041068 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1041068 ']' 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:33.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:33.830 19:20:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:33.830 [2024-07-25 19:20:26.153308] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:33.831 [2024-07-25 19:20:26.153399] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.831 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.831 [2024-07-25 19:20:26.228300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.088 [2024-07-25 19:20:26.342480] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.088 [2024-07-25 19:20:26.342533] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.088 [2024-07-25 19:20:26.342546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.088 [2024-07-25 19:20:26.342557] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.088 [2024-07-25 19:20:26.342567] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:34.088 [2024-07-25 19:20:26.342641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.088 [2024-07-25 19:20:26.342701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.088 [2024-07-25 19:20:26.342766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.088 [2024-07-25 19:20:26.342769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.653 19:20:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:34.654 19:20:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:29:34.654 19:20:27 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:34.654 19:20:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:34.654 19:20:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:34.654 19:20:27 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.654 19:20:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:34.654 19:20:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:34.654 19:20:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:34.654 19:20:27 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:34.654 19:20:27 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:34.654 19:20:27 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:29:34.654 19:20:27 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:34.654 19:20:27 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:34.654 19:20:27 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 
00:29:34.654 19:20:27 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:34.912 19:20:27 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:34.912 19:20:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:34.912 19:20:27 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:29:34.912 19:20:27 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:29:34.912 19:20:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:34.912 19:20:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:29:34.912 19:20:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:34.912 19:20:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:34.912 19:20:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:34.912 19:20:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:34.912 ************************************ 00:29:34.912 START TEST spdk_target_abort 00:29:34.912 ************************************ 00:29:34.912 19:20:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:29:34.912 19:20:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:34.912 19:20:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:29:34.912 19:20:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.912 19:20:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:38.190 spdk_targetn1 00:29:38.190 19:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.190 19:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:38.190 19:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.190 19:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:38.190 [2024-07-25 19:20:30.005049] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:38.190 [2024-07-25 19:20:30.037713] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:38.190 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:38.191 19:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:38.191 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.466 Initializing NVMe Controllers 00:29:41.466 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:41.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:41.466 Initialization complete. Launching workers. 
00:29:41.466 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10666, failed: 0 00:29:41.466 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1248, failed to submit 9418 00:29:41.466 success 860, unsuccess 388, failed 0 00:29:41.466 19:20:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:41.466 19:20:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:41.466 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.742 Initializing NVMe Controllers 00:29:44.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:44.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:44.742 Initialization complete. Launching workers. 
00:29:44.742 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8671, failed: 0 00:29:44.742 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1234, failed to submit 7437 00:29:44.742 success 318, unsuccess 916, failed 0 00:29:44.742 19:20:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:44.742 19:20:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:44.742 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.021 Initializing NVMe Controllers 00:29:48.021 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:48.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:48.021 Initialization complete. Launching workers. 
00:29:48.021 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31456, failed: 0 00:29:48.021 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2702, failed to submit 28754 00:29:48.021 success 531, unsuccess 2171, failed 0 00:29:48.021 19:20:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:48.021 19:20:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.021 19:20:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.021 19:20:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.021 19:20:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:48.021 19:20:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.021 19:20:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.952 19:20:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.952 19:20:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1041068 00:29:48.952 19:20:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1041068 ']' 00:29:48.952 19:20:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1041068 00:29:48.952 19:20:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:29:48.952 19:20:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:48.952 19:20:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1041068 00:29:48.952 19:20:41 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:48.952 19:20:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:48.952 19:20:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1041068' 00:29:48.952 killing process with pid 1041068 00:29:48.952 19:20:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1041068 00:29:48.952 19:20:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1041068 00:29:49.211 00:29:49.211 real 0m14.306s 00:29:49.211 user 0m55.346s 00:29:49.211 sys 0m3.180s 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:49.211 ************************************ 00:29:49.211 END TEST spdk_target_abort 00:29:49.211 ************************************ 00:29:49.211 19:20:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:49.211 19:20:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:49.211 19:20:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:49.211 19:20:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:49.211 ************************************ 00:29:49.211 START TEST kernel_target_abort 00:29:49.211 ************************************ 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:49.211 19:20:41 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:49.211 19:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:50.587 Waiting for block devices as requested 00:29:50.587 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:50.587 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:50.587 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:50.870 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:50.870 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:50.870 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:50.870 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:51.132 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:51.132 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:29:51.132 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:51.390 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:51.390 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:51.390 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:51.390 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:51.648 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:51.648 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:51.648 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local 
device=nvme0n1 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:51.905 No valid GPT data, bailing 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:51.905 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:29:51.906 00:29:51.906 Discovery Log Number of Records 2, Generation counter 2 00:29:51.906 =====Discovery Log Entry 0====== 00:29:51.906 trtype: tcp 00:29:51.906 adrfam: ipv4 00:29:51.906 subtype: current discovery subsystem 00:29:51.906 treq: not specified, sq flow control disable supported 00:29:51.906 portid: 1 00:29:51.906 trsvcid: 4420 00:29:51.906 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:51.906 traddr: 10.0.0.1 00:29:51.906 eflags: none 00:29:51.906 sectype: none 00:29:51.906 =====Discovery Log Entry 1====== 00:29:51.906 trtype: tcp 00:29:51.906 adrfam: ipv4 00:29:51.906 subtype: nvme subsystem 00:29:51.906 treq: not specified, sq flow control disable supported 00:29:51.906 portid: 1 00:29:51.906 trsvcid: 4420 00:29:51.906 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:51.906 traddr: 10.0.0.1 00:29:51.906 eflags: none 00:29:51.906 sectype: none 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:51.906 19:20:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:51.906 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.182 Initializing NVMe Controllers 00:29:55.182 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:55.182 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:55.182 Initialization complete. Launching workers. 
00:29:55.182 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28801, failed: 0 00:29:55.182 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28801, failed to submit 0 00:29:55.182 success 0, unsuccess 28801, failed 0 00:29:55.182 19:20:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:55.182 19:20:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:55.182 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.456 Initializing NVMe Controllers 00:29:58.456 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:58.456 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:58.456 Initialization complete. Launching workers. 
00:29:58.456 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 58668, failed: 0 00:29:58.456 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14778, failed to submit 43890 00:29:58.456 success 0, unsuccess 14778, failed 0 00:29:58.456 19:20:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:58.456 19:20:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:58.456 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.744 Initializing NVMe Controllers 00:30:01.744 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:01.744 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:01.744 Initialization complete. Launching workers. 
00:30:01.744 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 59011, failed: 0 00:30:01.744 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14730, failed to submit 44281 00:30:01.744 success 0, unsuccess 14730, failed 0 00:30:01.744 19:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:01.744 19:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:01.744 19:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:01.744 19:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:01.744 19:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:01.744 19:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:01.744 19:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:01.744 19:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:01.744 19:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:01.744 19:20:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:02.679 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:02.679 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:02.679 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:02.679 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:02.679 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:02.679 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:30:02.679 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:02.937 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:02.937 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:02.937 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:02.937 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:02.937 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:02.937 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:02.937 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:30:02.937 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:02.937 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:03.874 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:30:03.874 00:30:03.874 real 0m14.797s 00:30:03.874 user 0m4.894s 00:30:03.874 sys 0m3.762s 00:30:03.874 19:20:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:03.874 19:20:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:03.874 ************************************ 00:30:03.874 END TEST kernel_target_abort 00:30:03.874 ************************************ 00:30:03.874 19:20:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:03.874 19:20:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:03.874 19:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:03.874 19:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:03.874 19:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:03.874 19:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:03.874 19:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:03.874 19:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:03.874 rmmod nvme_tcp 00:30:04.132 rmmod nvme_fabrics 00:30:04.132 rmmod nvme_keyring 00:30:04.132 19:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:30:04.132 19:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:04.132 19:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:04.132 19:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1041068 ']' 00:30:04.132 19:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1041068 00:30:04.132 19:20:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1041068 ']' 00:30:04.132 19:20:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1041068 00:30:04.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1041068) - No such process 00:30:04.132 19:20:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1041068 is not found' 00:30:04.132 Process with pid 1041068 is not found 00:30:04.132 19:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:04.132 19:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:05.506 Waiting for block devices as requested 00:30:05.506 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:05.506 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:05.506 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:05.506 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:05.506 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:05.764 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:05.764 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:05.764 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:05.764 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:30:06.022 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:06.022 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:06.281 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:06.281 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:06.281 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:06.281 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:06.539 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:06.539 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:06.539 19:20:58 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:06.539 19:20:58 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:06.539 19:20:58 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:06.539 19:20:58 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:06.539 19:20:58 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.539 19:20:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:06.539 19:20:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.074 19:21:01 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:09.074 00:30:09.074 real 0m40.165s 00:30:09.074 user 1m2.848s 00:30:09.074 sys 0m10.955s 00:30:09.074 19:21:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:09.074 19:21:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:09.074 ************************************ 00:30:09.074 END TEST nvmf_abort_qd_sizes 00:30:09.074 ************************************ 00:30:09.074 19:21:01 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:09.074 19:21:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:09.074 19:21:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:09.074 19:21:01 -- common/autotest_common.sh@10 -- # set +x 00:30:09.074 ************************************ 00:30:09.074 START TEST keyring_file 00:30:09.074 ************************************ 00:30:09.074 19:21:01 keyring_file -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:09.074 * Looking for test storage... 00:30:09.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:09.074 19:21:01 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.074 19:21:01 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.074 19:21:01 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.074 19:21:01 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.074 19:21:01 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.074 19:21:01 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.074 19:21:01 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.074 19:21:01 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:09.074 19:21:01 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:09.074 19:21:01 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:09.074 19:21:01 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:09.074 19:21:01 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:09.074 19:21:01 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:09.074 19:21:01 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:09.074 19:21:01 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:09.074 19:21:01 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cv7Gk0tqKC 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:09.074 19:21:01 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cv7Gk0tqKC 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cv7Gk0tqKC 00:30:09.074 19:21:01 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.cv7Gk0tqKC 00:30:09.074 19:21:01 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NUkuSTk3IW 00:30:09.074 19:21:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:09.075 19:21:01 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:09.075 19:21:01 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:09.075 19:21:01 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:09.075 19:21:01 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:09.075 19:21:01 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:09.075 19:21:01 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:09.075 19:21:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NUkuSTk3IW 00:30:09.075 19:21:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NUkuSTk3IW 00:30:09.075 19:21:01 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.NUkuSTk3IW 00:30:09.075 19:21:01 keyring_file -- keyring/file.sh@30 -- # tgtpid=1047333 00:30:09.075 19:21:01 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:09.075 19:21:01 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1047333 00:30:09.075 19:21:01 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1047333 ']' 00:30:09.075 19:21:01 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.075 19:21:01 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:09.075 19:21:01 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.075 19:21:01 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:09.075 19:21:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:09.075 [2024-07-25 19:21:01.303336] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:30:09.075 [2024-07-25 19:21:01.303459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047333 ] 00:30:09.075 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.075 [2024-07-25 19:21:01.380514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.075 [2024-07-25 19:21:01.499267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.009 19:21:02 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:10.009 19:21:02 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:30:10.009 19:21:02 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:10.009 19:21:02 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.009 19:21:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:10.009 [2024-07-25 19:21:02.250613] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.009 null0 00:30:10.009 [2024-07-25 19:21:02.282663] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:10.009 [2024-07-25 19:21:02.283162] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:10.009 [2024-07-25 19:21:02.290661] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:10.009 19:21:02 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.009 19:21:02 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:10.009 19:21:02 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:10.009 19:21:02 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 
4420 nqn.2016-06.io.spdk:cnode0 00:30:10.009 19:21:02 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:10.009 19:21:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:10.009 19:21:02 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:10.009 19:21:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:10.009 19:21:02 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:10.009 19:21:02 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.009 19:21:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:10.010 [2024-07-25 19:21:02.302690] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:10.010 request: 00:30:10.010 { 00:30:10.010 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:10.010 "secure_channel": false, 00:30:10.010 "listen_address": { 00:30:10.010 "trtype": "tcp", 00:30:10.010 "traddr": "127.0.0.1", 00:30:10.010 "trsvcid": "4420" 00:30:10.010 }, 00:30:10.010 "method": "nvmf_subsystem_add_listener", 00:30:10.010 "req_id": 1 00:30:10.010 } 00:30:10.010 Got JSON-RPC error response 00:30:10.010 response: 00:30:10.010 { 00:30:10.010 "code": -32602, 00:30:10.010 "message": "Invalid parameters" 00:30:10.010 } 00:30:10.010 19:21:02 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:10.010 19:21:02 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:30:10.010 19:21:02 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:10.010 19:21:02 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:10.010 19:21:02 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:10.010 19:21:02 keyring_file -- keyring/file.sh@46 -- # bperfpid=1047503 00:30:10.010 19:21:02 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1047503 
/var/tmp/bperf.sock 00:30:10.010 19:21:02 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1047503 ']' 00:30:10.010 19:21:02 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:10.010 19:21:02 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:10.010 19:21:02 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:10.010 19:21:02 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:10.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:10.010 19:21:02 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:10.010 19:21:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:10.010 [2024-07-25 19:21:02.352187] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:30:10.010 [2024-07-25 19:21:02.352257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047503 ] 00:30:10.010 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.010 [2024-07-25 19:21:02.422783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.268 [2024-07-25 19:21:02.548486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.834 19:21:03 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:10.834 19:21:03 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:30:10.834 19:21:03 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cv7Gk0tqKC 00:30:10.834 19:21:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cv7Gk0tqKC 00:30:11.092 19:21:03 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NUkuSTk3IW 00:30:11.092 19:21:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NUkuSTk3IW 00:30:11.350 19:21:03 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:11.350 19:21:03 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:11.350 19:21:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:11.350 19:21:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:11.350 19:21:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:11.607 19:21:04 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.cv7Gk0tqKC == 
\/\t\m\p\/\t\m\p\.\c\v\7\G\k\0\t\q\K\C ]] 00:30:11.607 19:21:04 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:30:11.607 19:21:04 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:11.607 19:21:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:11.607 19:21:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:11.607 19:21:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:11.865 19:21:04 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.NUkuSTk3IW == \/\t\m\p\/\t\m\p\.\N\U\k\u\S\T\k\3\I\W ]] 00:30:11.865 19:21:04 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:11.865 19:21:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:11.865 19:21:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:11.865 19:21:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:11.865 19:21:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:11.865 19:21:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:12.123 19:21:04 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:12.123 19:21:04 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:12.123 19:21:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:12.123 19:21:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:12.123 19:21:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:12.123 19:21:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:12.123 19:21:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:12.381 19:21:04 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:30:12.381 19:21:04 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:12.381 19:21:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:12.638 [2024-07-25 19:21:05.027373] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:12.638 nvme0n1 00:30:12.896 19:21:05 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:12.896 19:21:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:12.896 19:21:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:12.896 19:21:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:12.896 19:21:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:12.896 19:21:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:13.154 19:21:05 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:13.154 19:21:05 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:13.154 19:21:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:13.154 19:21:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:13.154 19:21:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:13.154 19:21:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:13.154 19:21:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:13.412 19:21:05 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:13.412 19:21:05 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:13.412 Running I/O for 1 seconds... 00:30:14.347 00:30:14.347 Latency(us) 00:30:14.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.347 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:14.347 nvme0n1 : 1.02 4576.57 17.88 0.00 0.00 27732.80 4247.70 35729.26 00:30:14.348 =================================================================================================================== 00:30:14.348 Total : 4576.57 17.88 0.00 0.00 27732.80 4247.70 35729.26 00:30:14.348 0 00:30:14.348 19:21:06 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:14.348 19:21:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:14.634 19:21:07 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:14.634 19:21:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:14.634 19:21:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:14.634 19:21:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:14.634 19:21:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.634 19:21:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:14.892 19:21:07 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:14.892 19:21:07 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:14.892 19:21:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:14.892 19:21:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:14.892 19:21:07 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:14.892 19:21:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.892 19:21:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:15.150 19:21:07 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:15.150 19:21:07 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:15.150 19:21:07 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:15.150 19:21:07 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:15.150 19:21:07 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:30:15.150 19:21:07 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:15.150 19:21:07 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:30:15.150 19:21:07 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:15.150 19:21:07 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:15.150 19:21:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:15.408 [2024-07-25 19:21:07.773156] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:15.408 [2024-07-25 19:21:07.773410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9319a0 (107): Transport endpoint is not connected 00:30:15.408 [2024-07-25 19:21:07.774402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9319a0 (9): Bad file descriptor 00:30:15.408 [2024-07-25 19:21:07.775400] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:15.408 [2024-07-25 19:21:07.775436] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:15.408 [2024-07-25 19:21:07.775451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:15.408 request: 00:30:15.408 { 00:30:15.408 "name": "nvme0", 00:30:15.408 "trtype": "tcp", 00:30:15.408 "traddr": "127.0.0.1", 00:30:15.408 "adrfam": "ipv4", 00:30:15.408 "trsvcid": "4420", 00:30:15.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:15.408 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:15.408 "prchk_reftag": false, 00:30:15.408 "prchk_guard": false, 00:30:15.408 "hdgst": false, 00:30:15.408 "ddgst": false, 00:30:15.408 "psk": "key1", 00:30:15.408 "method": "bdev_nvme_attach_controller", 00:30:15.408 "req_id": 1 00:30:15.408 } 00:30:15.408 Got JSON-RPC error response 00:30:15.408 response: 00:30:15.408 { 00:30:15.408 "code": -5, 00:30:15.408 "message": "Input/output error" 00:30:15.408 } 00:30:15.408 19:21:07 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:30:15.408 19:21:07 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:15.408 19:21:07 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:15.408 19:21:07 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:15.408 19:21:07 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:15.408 
19:21:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:15.408 19:21:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:15.408 19:21:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:15.408 19:21:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:15.408 19:21:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:15.666 19:21:08 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:15.666 19:21:08 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:15.666 19:21:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:15.666 19:21:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:15.666 19:21:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:15.666 19:21:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:15.666 19:21:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:15.924 19:21:08 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:15.924 19:21:08 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:15.924 19:21:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:16.181 19:21:08 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:16.181 19:21:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:16.439 19:21:08 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:16.439 19:21:08 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:16.439 19:21:08 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:16.697 19:21:09 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:16.697 19:21:09 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.cv7Gk0tqKC 00:30:16.697 19:21:09 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.cv7Gk0tqKC 00:30:16.697 19:21:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:16.697 19:21:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.cv7Gk0tqKC 00:30:16.697 19:21:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:30:16.697 19:21:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:16.697 19:21:09 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:30:16.697 19:21:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:16.697 19:21:09 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cv7Gk0tqKC 00:30:16.697 19:21:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cv7Gk0tqKC 00:30:16.955 [2024-07-25 19:21:09.328412] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.cv7Gk0tqKC': 0100660 00:30:16.955 [2024-07-25 19:21:09.328474] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:16.955 request: 00:30:16.955 { 00:30:16.955 "name": "key0", 00:30:16.955 "path": "/tmp/tmp.cv7Gk0tqKC", 00:30:16.955 "method": "keyring_file_add_key", 00:30:16.955 "req_id": 1 00:30:16.955 } 00:30:16.955 Got JSON-RPC error response 00:30:16.955 response: 00:30:16.955 { 00:30:16.955 "code": -1, 00:30:16.955 "message": "Operation not permitted" 
00:30:16.955 } 00:30:16.955 19:21:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:30:16.955 19:21:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:16.955 19:21:09 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:16.955 19:21:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:16.955 19:21:09 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.cv7Gk0tqKC 00:30:16.955 19:21:09 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cv7Gk0tqKC 00:30:16.955 19:21:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cv7Gk0tqKC 00:30:17.214 19:21:09 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.cv7Gk0tqKC 00:30:17.214 19:21:09 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:17.214 19:21:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:17.214 19:21:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:17.214 19:21:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:17.214 19:21:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:17.214 19:21:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:17.472 19:21:09 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:17.472 19:21:09 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:17.472 19:21:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:17.472 19:21:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:17.472 19:21:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:30:17.472 19:21:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:17.472 19:21:09 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:30:17.472 19:21:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:17.472 19:21:09 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:17.472 19:21:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:17.730 [2024-07-25 19:21:10.078485] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.cv7Gk0tqKC': No such file or directory 00:30:17.730 [2024-07-25 19:21:10.078534] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:17.730 [2024-07-25 19:21:10.078565] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:17.730 [2024-07-25 19:21:10.078578] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:17.730 [2024-07-25 19:21:10.078591] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:17.730 request: 00:30:17.730 { 00:30:17.730 "name": "nvme0", 00:30:17.730 "trtype": "tcp", 00:30:17.730 "traddr": "127.0.0.1", 00:30:17.730 "adrfam": "ipv4", 00:30:17.730 "trsvcid": "4420", 00:30:17.730 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:17.730 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:17.730 
"prchk_reftag": false, 00:30:17.730 "prchk_guard": false, 00:30:17.730 "hdgst": false, 00:30:17.730 "ddgst": false, 00:30:17.730 "psk": "key0", 00:30:17.730 "method": "bdev_nvme_attach_controller", 00:30:17.730 "req_id": 1 00:30:17.730 } 00:30:17.730 Got JSON-RPC error response 00:30:17.730 response: 00:30:17.730 { 00:30:17.730 "code": -19, 00:30:17.730 "message": "No such device" 00:30:17.730 } 00:30:17.730 19:21:10 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:30:17.730 19:21:10 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:17.730 19:21:10 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:17.730 19:21:10 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:17.730 19:21:10 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:17.730 19:21:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:17.988 19:21:10 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:17.988 19:21:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:17.988 19:21:10 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:17.988 19:21:10 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:17.988 19:21:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:17.988 19:21:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:17.988 19:21:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QWfd4GO18N 00:30:17.988 19:21:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:17.988 19:21:10 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:17.988 19:21:10 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:17.988 19:21:10 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:17.988 19:21:10 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:17.988 19:21:10 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:17.988 19:21:10 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:17.988 19:21:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QWfd4GO18N 00:30:17.988 19:21:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QWfd4GO18N 00:30:17.988 19:21:10 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.QWfd4GO18N 00:30:17.988 19:21:10 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QWfd4GO18N 00:30:17.988 19:21:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QWfd4GO18N 00:30:18.246 19:21:10 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:18.246 19:21:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:18.504 nvme0n1 00:30:18.504 19:21:10 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:18.504 19:21:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:18.504 19:21:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:18.504 19:21:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:18.504 19:21:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:18.504 19:21:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:30:18.762 19:21:11 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:18.762 19:21:11 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:18.762 19:21:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:19.020 19:21:11 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:19.020 19:21:11 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:19.020 19:21:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:19.020 19:21:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:19.020 19:21:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.278 19:21:11 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:19.278 19:21:11 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:19.278 19:21:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:19.278 19:21:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:19.278 19:21:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:19.278 19:21:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.278 19:21:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:19.536 19:21:11 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:19.536 19:21:11 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:19.536 19:21:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:19.794 19:21:12 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:30:19.794 19:21:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.794 19:21:12 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:20.051 19:21:12 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:20.051 19:21:12 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QWfd4GO18N 00:30:20.051 19:21:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QWfd4GO18N 00:30:20.309 19:21:12 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NUkuSTk3IW 00:30:20.309 19:21:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NUkuSTk3IW 00:30:20.567 19:21:12 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:20.567 19:21:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:20.824 nvme0n1 00:30:20.824 19:21:13 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:20.824 19:21:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:21.082 19:21:13 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:21.082 "subsystems": [ 00:30:21.082 { 00:30:21.082 "subsystem": "keyring", 00:30:21.082 "config": [ 00:30:21.082 { 00:30:21.082 "method": "keyring_file_add_key", 00:30:21.082 
"params": { 00:30:21.082 "name": "key0", 00:30:21.082 "path": "/tmp/tmp.QWfd4GO18N" 00:30:21.082 } 00:30:21.082 }, 00:30:21.082 { 00:30:21.082 "method": "keyring_file_add_key", 00:30:21.082 "params": { 00:30:21.082 "name": "key1", 00:30:21.082 "path": "/tmp/tmp.NUkuSTk3IW" 00:30:21.082 } 00:30:21.082 } 00:30:21.082 ] 00:30:21.082 }, 00:30:21.082 { 00:30:21.082 "subsystem": "iobuf", 00:30:21.082 "config": [ 00:30:21.082 { 00:30:21.082 "method": "iobuf_set_options", 00:30:21.082 "params": { 00:30:21.082 "small_pool_count": 8192, 00:30:21.082 "large_pool_count": 1024, 00:30:21.082 "small_bufsize": 8192, 00:30:21.082 "large_bufsize": 135168 00:30:21.082 } 00:30:21.082 } 00:30:21.082 ] 00:30:21.082 }, 00:30:21.082 { 00:30:21.082 "subsystem": "sock", 00:30:21.082 "config": [ 00:30:21.082 { 00:30:21.082 "method": "sock_set_default_impl", 00:30:21.082 "params": { 00:30:21.082 "impl_name": "posix" 00:30:21.082 } 00:30:21.082 }, 00:30:21.082 { 00:30:21.082 "method": "sock_impl_set_options", 00:30:21.082 "params": { 00:30:21.082 "impl_name": "ssl", 00:30:21.082 "recv_buf_size": 4096, 00:30:21.082 "send_buf_size": 4096, 00:30:21.082 "enable_recv_pipe": true, 00:30:21.082 "enable_quickack": false, 00:30:21.082 "enable_placement_id": 0, 00:30:21.082 "enable_zerocopy_send_server": true, 00:30:21.082 "enable_zerocopy_send_client": false, 00:30:21.082 "zerocopy_threshold": 0, 00:30:21.082 "tls_version": 0, 00:30:21.082 "enable_ktls": false 00:30:21.082 } 00:30:21.082 }, 00:30:21.082 { 00:30:21.082 "method": "sock_impl_set_options", 00:30:21.082 "params": { 00:30:21.082 "impl_name": "posix", 00:30:21.082 "recv_buf_size": 2097152, 00:30:21.082 "send_buf_size": 2097152, 00:30:21.082 "enable_recv_pipe": true, 00:30:21.082 "enable_quickack": false, 00:30:21.082 "enable_placement_id": 0, 00:30:21.082 "enable_zerocopy_send_server": true, 00:30:21.082 "enable_zerocopy_send_client": false, 00:30:21.082 "zerocopy_threshold": 0, 00:30:21.082 "tls_version": 0, 00:30:21.082 "enable_ktls": false 
00:30:21.082 } 00:30:21.082 } 00:30:21.082 ] 00:30:21.082 }, 00:30:21.082 { 00:30:21.082 "subsystem": "vmd", 00:30:21.082 "config": [] 00:30:21.082 }, 00:30:21.082 { 00:30:21.082 "subsystem": "accel", 00:30:21.082 "config": [ 00:30:21.082 { 00:30:21.082 "method": "accel_set_options", 00:30:21.082 "params": { 00:30:21.082 "small_cache_size": 128, 00:30:21.082 "large_cache_size": 16, 00:30:21.082 "task_count": 2048, 00:30:21.082 "sequence_count": 2048, 00:30:21.082 "buf_count": 2048 00:30:21.082 } 00:30:21.082 } 00:30:21.082 ] 00:30:21.082 }, 00:30:21.082 { 00:30:21.082 "subsystem": "bdev", 00:30:21.082 "config": [ 00:30:21.082 { 00:30:21.082 "method": "bdev_set_options", 00:30:21.082 "params": { 00:30:21.082 "bdev_io_pool_size": 65535, 00:30:21.082 "bdev_io_cache_size": 256, 00:30:21.082 "bdev_auto_examine": true, 00:30:21.082 "iobuf_small_cache_size": 128, 00:30:21.082 "iobuf_large_cache_size": 16 00:30:21.082 } 00:30:21.082 }, 00:30:21.082 { 00:30:21.082 "method": "bdev_raid_set_options", 00:30:21.082 "params": { 00:30:21.082 "process_window_size_kb": 1024, 00:30:21.082 "process_max_bandwidth_mb_sec": 0 00:30:21.082 } 00:30:21.082 }, 00:30:21.082 { 00:30:21.082 "method": "bdev_iscsi_set_options", 00:30:21.082 "params": { 00:30:21.082 "timeout_sec": 30 00:30:21.082 } 00:30:21.082 }, 00:30:21.082 { 00:30:21.082 "method": "bdev_nvme_set_options", 00:30:21.082 "params": { 00:30:21.082 "action_on_timeout": "none", 00:30:21.082 "timeout_us": 0, 00:30:21.082 "timeout_admin_us": 0, 00:30:21.082 "keep_alive_timeout_ms": 10000, 00:30:21.082 "arbitration_burst": 0, 00:30:21.082 "low_priority_weight": 0, 00:30:21.082 "medium_priority_weight": 0, 00:30:21.082 "high_priority_weight": 0, 00:30:21.082 "nvme_adminq_poll_period_us": 10000, 00:30:21.082 "nvme_ioq_poll_period_us": 0, 00:30:21.082 "io_queue_requests": 512, 00:30:21.082 "delay_cmd_submit": true, 00:30:21.083 "transport_retry_count": 4, 00:30:21.083 "bdev_retry_count": 3, 00:30:21.083 "transport_ack_timeout": 0, 
00:30:21.083 "ctrlr_loss_timeout_sec": 0, 00:30:21.083 "reconnect_delay_sec": 0, 00:30:21.083 "fast_io_fail_timeout_sec": 0, 00:30:21.083 "disable_auto_failback": false, 00:30:21.083 "generate_uuids": false, 00:30:21.083 "transport_tos": 0, 00:30:21.083 "nvme_error_stat": false, 00:30:21.083 "rdma_srq_size": 0, 00:30:21.083 "io_path_stat": false, 00:30:21.083 "allow_accel_sequence": false, 00:30:21.083 "rdma_max_cq_size": 0, 00:30:21.083 "rdma_cm_event_timeout_ms": 0, 00:30:21.083 "dhchap_digests": [ 00:30:21.083 "sha256", 00:30:21.083 "sha384", 00:30:21.083 "sha512" 00:30:21.083 ], 00:30:21.083 "dhchap_dhgroups": [ 00:30:21.083 "null", 00:30:21.083 "ffdhe2048", 00:30:21.083 "ffdhe3072", 00:30:21.083 "ffdhe4096", 00:30:21.083 "ffdhe6144", 00:30:21.083 "ffdhe8192" 00:30:21.083 ] 00:30:21.083 } 00:30:21.083 }, 00:30:21.083 { 00:30:21.083 "method": "bdev_nvme_attach_controller", 00:30:21.083 "params": { 00:30:21.083 "name": "nvme0", 00:30:21.083 "trtype": "TCP", 00:30:21.083 "adrfam": "IPv4", 00:30:21.083 "traddr": "127.0.0.1", 00:30:21.083 "trsvcid": "4420", 00:30:21.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:21.083 "prchk_reftag": false, 00:30:21.083 "prchk_guard": false, 00:30:21.083 "ctrlr_loss_timeout_sec": 0, 00:30:21.083 "reconnect_delay_sec": 0, 00:30:21.083 "fast_io_fail_timeout_sec": 0, 00:30:21.083 "psk": "key0", 00:30:21.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:21.083 "hdgst": false, 00:30:21.083 "ddgst": false 00:30:21.083 } 00:30:21.083 }, 00:30:21.083 { 00:30:21.083 "method": "bdev_nvme_set_hotplug", 00:30:21.083 "params": { 00:30:21.083 "period_us": 100000, 00:30:21.083 "enable": false 00:30:21.083 } 00:30:21.083 }, 00:30:21.083 { 00:30:21.083 "method": "bdev_wait_for_examine" 00:30:21.083 } 00:30:21.083 ] 00:30:21.083 }, 00:30:21.083 { 00:30:21.083 "subsystem": "nbd", 00:30:21.083 "config": [] 00:30:21.083 } 00:30:21.083 ] 00:30:21.083 }' 00:30:21.083 19:21:13 keyring_file -- keyring/file.sh@114 -- # killprocess 1047503 00:30:21.083 
19:21:13 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1047503 ']' 00:30:21.083 19:21:13 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1047503 00:30:21.083 19:21:13 keyring_file -- common/autotest_common.sh@955 -- # uname 00:30:21.083 19:21:13 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:21.340 19:21:13 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1047503 00:30:21.341 19:21:13 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:21.341 19:21:13 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:21.341 19:21:13 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1047503' 00:30:21.341 killing process with pid 1047503 00:30:21.341 19:21:13 keyring_file -- common/autotest_common.sh@969 -- # kill 1047503 00:30:21.341 Received shutdown signal, test time was about 1.000000 seconds 00:30:21.341 00:30:21.341 Latency(us) 00:30:21.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.341 =================================================================================================================== 00:30:21.341 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:21.341 19:21:13 keyring_file -- common/autotest_common.sh@974 -- # wait 1047503 00:30:21.599 19:21:13 keyring_file -- keyring/file.sh@117 -- # bperfpid=1049474 00:30:21.599 19:21:13 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1049474 /var/tmp/bperf.sock 00:30:21.599 19:21:13 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1049474 ']' 00:30:21.599 19:21:13 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:21.599 19:21:13 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:21.599 19:21:13 keyring_file -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:30:21.599 19:21:13 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:21.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:21.599 19:21:13 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:21.599 "subsystems": [ 00:30:21.599 { 00:30:21.599 "subsystem": "keyring", 00:30:21.599 "config": [ 00:30:21.599 { 00:30:21.599 "method": "keyring_file_add_key", 00:30:21.599 "params": { 00:30:21.599 "name": "key0", 00:30:21.599 "path": "/tmp/tmp.QWfd4GO18N" 00:30:21.599 } 00:30:21.599 }, 00:30:21.599 { 00:30:21.599 "method": "keyring_file_add_key", 00:30:21.599 "params": { 00:30:21.599 "name": "key1", 00:30:21.599 "path": "/tmp/tmp.NUkuSTk3IW" 00:30:21.599 } 00:30:21.599 } 00:30:21.599 ] 00:30:21.599 }, 00:30:21.599 { 00:30:21.599 "subsystem": "iobuf", 00:30:21.599 "config": [ 00:30:21.599 { 00:30:21.599 "method": "iobuf_set_options", 00:30:21.599 "params": { 00:30:21.599 "small_pool_count": 8192, 00:30:21.599 "large_pool_count": 1024, 00:30:21.599 "small_bufsize": 8192, 00:30:21.599 "large_bufsize": 135168 00:30:21.599 } 00:30:21.599 } 00:30:21.599 ] 00:30:21.599 }, 00:30:21.599 { 00:30:21.599 "subsystem": "sock", 00:30:21.599 "config": [ 00:30:21.599 { 00:30:21.599 "method": "sock_set_default_impl", 00:30:21.599 "params": { 00:30:21.599 "impl_name": "posix" 00:30:21.599 } 00:30:21.599 }, 00:30:21.599 { 00:30:21.599 "method": "sock_impl_set_options", 00:30:21.599 "params": { 00:30:21.599 "impl_name": "ssl", 00:30:21.599 "recv_buf_size": 4096, 00:30:21.599 "send_buf_size": 4096, 00:30:21.599 "enable_recv_pipe": true, 00:30:21.599 "enable_quickack": false, 00:30:21.599 "enable_placement_id": 0, 00:30:21.599 "enable_zerocopy_send_server": true, 00:30:21.599 "enable_zerocopy_send_client": false, 00:30:21.599 "zerocopy_threshold": 0, 00:30:21.599 "tls_version": 0, 
00:30:21.599 "enable_ktls": false 00:30:21.599 } 00:30:21.599 }, 00:30:21.599 { 00:30:21.599 "method": "sock_impl_set_options", 00:30:21.599 "params": { 00:30:21.599 "impl_name": "posix", 00:30:21.599 "recv_buf_size": 2097152, 00:30:21.599 "send_buf_size": 2097152, 00:30:21.599 "enable_recv_pipe": true, 00:30:21.599 "enable_quickack": false, 00:30:21.599 "enable_placement_id": 0, 00:30:21.599 "enable_zerocopy_send_server": true, 00:30:21.599 "enable_zerocopy_send_client": false, 00:30:21.599 "zerocopy_threshold": 0, 00:30:21.599 "tls_version": 0, 00:30:21.599 "enable_ktls": false 00:30:21.599 } 00:30:21.599 } 00:30:21.599 ] 00:30:21.599 }, 00:30:21.599 { 00:30:21.599 "subsystem": "vmd", 00:30:21.599 "config": [] 00:30:21.599 }, 00:30:21.599 { 00:30:21.599 "subsystem": "accel", 00:30:21.599 "config": [ 00:30:21.599 { 00:30:21.599 "method": "accel_set_options", 00:30:21.599 "params": { 00:30:21.599 "small_cache_size": 128, 00:30:21.599 "large_cache_size": 16, 00:30:21.599 "task_count": 2048, 00:30:21.599 "sequence_count": 2048, 00:30:21.599 "buf_count": 2048 00:30:21.599 } 00:30:21.599 } 00:30:21.599 ] 00:30:21.599 }, 00:30:21.599 { 00:30:21.599 "subsystem": "bdev", 00:30:21.599 "config": [ 00:30:21.599 { 00:30:21.599 "method": "bdev_set_options", 00:30:21.599 "params": { 00:30:21.599 "bdev_io_pool_size": 65535, 00:30:21.599 "bdev_io_cache_size": 256, 00:30:21.599 "bdev_auto_examine": true, 00:30:21.599 "iobuf_small_cache_size": 128, 00:30:21.599 "iobuf_large_cache_size": 16 00:30:21.599 } 00:30:21.599 }, 00:30:21.599 { 00:30:21.599 "method": "bdev_raid_set_options", 00:30:21.599 "params": { 00:30:21.599 "process_window_size_kb": 1024, 00:30:21.599 "process_max_bandwidth_mb_sec": 0 00:30:21.599 } 00:30:21.599 }, 00:30:21.599 { 00:30:21.599 "method": "bdev_iscsi_set_options", 00:30:21.599 "params": { 00:30:21.599 "timeout_sec": 30 00:30:21.599 } 00:30:21.599 }, 00:30:21.599 { 00:30:21.599 "method": "bdev_nvme_set_options", 00:30:21.599 "params": { 00:30:21.599 
"action_on_timeout": "none", 00:30:21.599 "timeout_us": 0, 00:30:21.599 "timeout_admin_us": 0, 00:30:21.599 "keep_alive_timeout_ms": 10000, 00:30:21.599 "arbitration_burst": 0, 00:30:21.599 "low_priority_weight": 0, 00:30:21.599 "medium_priority_weight": 0, 00:30:21.599 "high_priority_weight": 0, 00:30:21.599 "nvme_adminq_poll_period_us": 10000, 00:30:21.599 "nvme_ioq_poll_period_us": 0, 00:30:21.599 "io_queue_requests": 512, 00:30:21.599 "delay_cmd_submit": true, 00:30:21.599 "transport_retry_count": 4, 00:30:21.599 "bdev_retry_count": 3, 00:30:21.599 "transport_ack_timeout": 0, 00:30:21.599 "ctrlr_loss_timeout_sec": 0, 00:30:21.599 "reconnect_delay_sec": 0, 00:30:21.599 "fast_io_fail_timeout_sec": 0, 00:30:21.599 "disable_auto_failback": false, 00:30:21.599 "generate_uuids": false, 00:30:21.599 "transport_tos": 0, 00:30:21.599 "nvme_error_stat": false, 00:30:21.599 "rdma_srq_size": 0, 00:30:21.599 "io_path_stat": false, 00:30:21.599 "allow_accel_sequence": false, 00:30:21.599 "rdma_max_cq_size": 0, 00:30:21.599 "rdma_cm_event_timeout_ms": 0, 00:30:21.599 "dhchap_digests": [ 00:30:21.599 "sha256", 00:30:21.599 "sha384", 00:30:21.599 "sha512" 00:30:21.599 ], 00:30:21.599 "dhchap_dhgroups": [ 00:30:21.599 "null", 00:30:21.599 "ffdhe2048", 00:30:21.599 "ffdhe3072", 00:30:21.599 "ffdhe4096", 00:30:21.599 "ffdhe6144", 00:30:21.599 "ffdhe8192" 00:30:21.599 ] 00:30:21.599 } 00:30:21.599 }, 00:30:21.599 { 00:30:21.599 "method": "bdev_nvme_attach_controller", 00:30:21.600 "params": { 00:30:21.600 "name": "nvme0", 00:30:21.600 "trtype": "TCP", 00:30:21.600 "adrfam": "IPv4", 00:30:21.600 "traddr": "127.0.0.1", 00:30:21.600 "trsvcid": "4420", 00:30:21.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:21.600 "prchk_reftag": false, 00:30:21.600 "prchk_guard": false, 00:30:21.600 "ctrlr_loss_timeout_sec": 0, 00:30:21.600 "reconnect_delay_sec": 0, 00:30:21.600 "fast_io_fail_timeout_sec": 0, 00:30:21.600 "psk": "key0", 00:30:21.600 "hostnqn": "nqn.2016-06.io.spdk:host0", 
00:30:21.600 "hdgst": false, 00:30:21.600 "ddgst": false 00:30:21.600 } 00:30:21.600 }, 00:30:21.600 { 00:30:21.600 "method": "bdev_nvme_set_hotplug", 00:30:21.600 "params": { 00:30:21.600 "period_us": 100000, 00:30:21.600 "enable": false 00:30:21.600 } 00:30:21.600 }, 00:30:21.600 { 00:30:21.600 "method": "bdev_wait_for_examine" 00:30:21.600 } 00:30:21.600 ] 00:30:21.600 }, 00:30:21.600 { 00:30:21.600 "subsystem": "nbd", 00:30:21.600 "config": [] 00:30:21.600 } 00:30:21.600 ] 00:30:21.600 }' 00:30:21.600 19:21:13 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:21.600 19:21:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:21.600 [2024-07-25 19:21:13.903785] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:21.600 [2024-07-25 19:21:13.903868] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049474 ] 00:30:21.600 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.600 [2024-07-25 19:21:13.972895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.858 [2024-07-25 19:21:14.089936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.858 [2024-07-25 19:21:14.277078] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:22.424 19:21:14 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:22.424 19:21:14 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:30:22.424 19:21:14 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:22.424 19:21:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:22.424 19:21:14 keyring_file -- keyring/file.sh@120 -- # jq length 
00:30:22.682 19:21:15 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:22.682 19:21:15 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:30:22.682 19:21:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:22.682 19:21:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:22.682 19:21:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:22.682 19:21:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:22.682 19:21:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:22.940 19:21:15 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:22.940 19:21:15 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:22.940 19:21:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:22.940 19:21:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:22.940 19:21:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:22.940 19:21:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:22.940 19:21:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:23.197 19:21:15 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:23.197 19:21:15 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:23.197 19:21:15 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:23.197 19:21:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:23.456 19:21:15 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:23.456 19:21:15 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:23.456 19:21:15 keyring_file -- keyring/file.sh@19 -- # rm -f 
/tmp/tmp.QWfd4GO18N /tmp/tmp.NUkuSTk3IW 00:30:23.456 19:21:15 keyring_file -- keyring/file.sh@20 -- # killprocess 1049474 00:30:23.456 19:21:15 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1049474 ']' 00:30:23.456 19:21:15 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1049474 00:30:23.456 19:21:15 keyring_file -- common/autotest_common.sh@955 -- # uname 00:30:23.456 19:21:15 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:23.456 19:21:15 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1049474 00:30:23.456 19:21:15 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:23.456 19:21:15 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:23.456 19:21:15 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1049474' 00:30:23.456 killing process with pid 1049474 00:30:23.456 19:21:15 keyring_file -- common/autotest_common.sh@969 -- # kill 1049474 00:30:23.456 Received shutdown signal, test time was about 1.000000 seconds 00:30:23.456 00:30:23.456 Latency(us) 00:30:23.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.456 =================================================================================================================== 00:30:23.456 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:23.456 19:21:15 keyring_file -- common/autotest_common.sh@974 -- # wait 1049474 00:30:23.714 19:21:16 keyring_file -- keyring/file.sh@21 -- # killprocess 1047333 00:30:23.714 19:21:16 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1047333 ']' 00:30:23.714 19:21:16 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1047333 00:30:23.714 19:21:16 keyring_file -- common/autotest_common.sh@955 -- # uname 00:30:23.714 19:21:16 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:23.714 19:21:16 keyring_file -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1047333 00:30:23.714 19:21:16 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:23.714 19:21:16 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:23.714 19:21:16 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1047333' 00:30:23.714 killing process with pid 1047333 00:30:23.714 19:21:16 keyring_file -- common/autotest_common.sh@969 -- # kill 1047333 00:30:23.714 [2024-07-25 19:21:16.184737] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:23.714 19:21:16 keyring_file -- common/autotest_common.sh@974 -- # wait 1047333 00:30:24.281 00:30:24.281 real 0m15.542s 00:30:24.281 user 0m37.448s 00:30:24.281 sys 0m3.363s 00:30:24.281 19:21:16 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:24.281 19:21:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:24.281 ************************************ 00:30:24.281 END TEST keyring_file 00:30:24.281 ************************************ 00:30:24.281 19:21:16 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:30:24.281 19:21:16 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:24.281 19:21:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:24.281 19:21:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:24.281 19:21:16 -- common/autotest_common.sh@10 -- # set +x 00:30:24.281 ************************************ 00:30:24.281 START TEST keyring_linux 00:30:24.281 ************************************ 00:30:24.281 19:21:16 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:24.281 * Looking for test storage... 
00:30:24.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:24.281 19:21:16 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:24.281 19:21:16 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:24.281 19:21:16 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.281 19:21:16 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.281 19:21:16 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.281 19:21:16 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.281 19:21:16 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.281 19:21:16 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.281 19:21:16 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.281 19:21:16 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:24.281 19:21:16 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:24.281 19:21:16 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:24.281 19:21:16 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:24.281 19:21:16 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:24.281 19:21:16 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:24.281 19:21:16 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:24.281 19:21:16 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:24.281 19:21:16 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:24.281 19:21:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:24.281 19:21:16 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:30:24.281 19:21:16 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:24.281 19:21:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:24.281 19:21:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:24.281 19:21:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:24.281 19:21:16 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:24.540 19:21:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:24.540 19:21:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:24.540 /tmp/:spdk-test:key0 00:30:24.540 19:21:16 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:24.540 19:21:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:24.540 19:21:16 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:24.540 19:21:16 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:24.540 19:21:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:24.540 19:21:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:24.540 19:21:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:24.540 19:21:16 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:30:24.540 19:21:16 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:24.540 19:21:16 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:24.540 19:21:16 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:24.540 19:21:16 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:24.540 19:21:16 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:24.540 19:21:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:24.540 19:21:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:24.540 /tmp/:spdk-test:key1 00:30:24.540 19:21:16 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1049959 00:30:24.540 19:21:16 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:24.540 19:21:16 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1049959 00:30:24.540 19:21:16 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1049959 ']' 00:30:24.540 19:21:16 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.540 19:21:16 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:24.540 19:21:16 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.540 19:21:16 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:24.540 19:21:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:24.540 [2024-07-25 19:21:16.856460] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:30:24.540 [2024-07-25 19:21:16.856548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049959 ] 00:30:24.540 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.540 [2024-07-25 19:21:16.927934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.798 [2024-07-25 19:21:17.046946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.362 19:21:17 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:25.362 19:21:17 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:30:25.362 19:21:17 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:25.362 19:21:17 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.362 19:21:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:25.362 [2024-07-25 19:21:17.814505] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.620 null0 00:30:25.621 [2024-07-25 19:21:17.846553] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:25.621 [2024-07-25 19:21:17.847033] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:25.621 19:21:17 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.621 19:21:17 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:25.621 549152157 00:30:25.621 19:21:17 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:25.621 580056247 00:30:25.621 19:21:17 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1050097 00:30:25.621 19:21:17 keyring_linux -- keyring/linux.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:25.621 19:21:17 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1050097 /var/tmp/bperf.sock 00:30:25.621 19:21:17 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1050097 ']' 00:30:25.621 19:21:17 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:25.621 19:21:17 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:25.621 19:21:17 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:25.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:25.621 19:21:17 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:25.621 19:21:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:25.621 [2024-07-25 19:21:17.912734] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:30:25.621 [2024-07-25 19:21:17.912810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050097 ] 00:30:25.621 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.621 [2024-07-25 19:21:17.982703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.878 [2024-07-25 19:21:18.101802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.444 19:21:18 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:26.444 19:21:18 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:30:26.444 19:21:18 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:30:26.444 19:21:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:30:26.701 19:21:19 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:30:26.701 19:21:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:27.267 19:21:19 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:27.268 19:21:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:27.268 [2024-07-25 19:21:19.667258] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:27.268 
nvme0n1 00:30:27.526 19:21:19 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:30:27.526 19:21:19 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:30:27.526 19:21:19 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:27.526 19:21:19 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:27.526 19:21:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:27.526 19:21:19 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:27.784 19:21:20 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:30:27.784 19:21:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:27.784 19:21:20 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:30:27.784 19:21:20 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:30:27.784 19:21:20 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:27.784 19:21:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:27.784 19:21:20 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:30:28.042 19:21:20 keyring_linux -- keyring/linux.sh@25 -- # sn=549152157 00:30:28.042 19:21:20 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:30:28.042 19:21:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:30:28.042 19:21:20 keyring_linux -- keyring/linux.sh@26 -- # [[ 549152157 == \5\4\9\1\5\2\1\5\7 ]] 00:30:28.042 19:21:20 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 549152157 00:30:28.042 19:21:20 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:30:28.042 19:21:20 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:28.042 Running I/O for 1 seconds... 00:30:28.976 00:30:28.976 Latency(us) 00:30:28.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:28.976 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:28.976 nvme0n1 : 1.03 3978.80 15.54 0.00 0.00 31785.24 6213.78 38447.79 00:30:28.976 =================================================================================================================== 00:30:28.976 Total : 3978.80 15.54 0.00 0.00 31785.24 6213.78 38447.79 00:30:28.976 0 00:30:28.976 19:21:21 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:28.976 19:21:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:29.234 19:21:21 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:30:29.234 19:21:21 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:30:29.234 19:21:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:29.234 19:21:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:29.234 19:21:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:29.234 19:21:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:29.493 19:21:21 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:30:29.493 19:21:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:29.493 19:21:21 keyring_linux -- keyring/linux.sh@23 -- # return 00:30:29.493 19:21:21 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:29.493 19:21:21 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:30:29.493 19:21:21 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:29.493 19:21:21 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:30:29.493 19:21:21 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:29.493 19:21:21 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:30:29.493 19:21:21 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:29.493 19:21:21 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:29.493 19:21:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:29.750 [2024-07-25 19:21:22.159898] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:29.750 [2024-07-25 19:21:22.160189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193ded0 (107): Transport endpoint is not connected 00:30:29.750 [2024-07-25 19:21:22.161183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x193ded0 (9): Bad file descriptor 00:30:29.750 [2024-07-25 19:21:22.162181] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:29.750 [2024-07-25 19:21:22.162201] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:29.750 [2024-07-25 19:21:22.162214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:29.750 request: 00:30:29.750 { 00:30:29.750 "name": "nvme0", 00:30:29.750 "trtype": "tcp", 00:30:29.750 "traddr": "127.0.0.1", 00:30:29.750 "adrfam": "ipv4", 00:30:29.750 "trsvcid": "4420", 00:30:29.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:29.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:29.750 "prchk_reftag": false, 00:30:29.750 "prchk_guard": false, 00:30:29.750 "hdgst": false, 00:30:29.750 "ddgst": false, 00:30:29.750 "psk": ":spdk-test:key1", 00:30:29.750 "method": "bdev_nvme_attach_controller", 00:30:29.750 "req_id": 1 00:30:29.750 } 00:30:29.750 Got JSON-RPC error response 00:30:29.750 response: 00:30:29.750 { 00:30:29.750 "code": -5, 00:30:29.750 "message": "Input/output error" 00:30:29.750 } 00:30:29.750 19:21:22 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:30:29.750 19:21:22 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:29.750 19:21:22 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:29.750 19:21:22 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@33 -- # sn=549152157 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 549152157 00:30:29.750 1 links removed 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@33 -- # sn=580056247 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 580056247 00:30:29.750 1 links removed 00:30:29.750 19:21:22 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1050097 00:30:29.750 19:21:22 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1050097 ']' 00:30:29.750 19:21:22 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1050097 00:30:29.750 19:21:22 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:30:29.750 19:21:22 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:29.751 19:21:22 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1050097 00:30:30.008 19:21:22 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:30.008 19:21:22 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:30.008 19:21:22 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1050097' 00:30:30.008 killing process with pid 1050097 00:30:30.008 19:21:22 keyring_linux -- common/autotest_common.sh@969 -- # kill 1050097 00:30:30.008 Received shutdown signal, test time was about 1.000000 seconds 00:30:30.008 00:30:30.008 Latency(us) 00:30:30.008 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.008 =================================================================================================================== 00:30:30.008 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:30.008 19:21:22 keyring_linux -- common/autotest_common.sh@974 -- # wait 1050097 00:30:30.267 19:21:22 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1049959 00:30:30.267 19:21:22 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1049959 ']' 00:30:30.267 19:21:22 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1049959 00:30:30.267 19:21:22 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:30:30.267 19:21:22 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:30.267 19:21:22 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1049959 00:30:30.267 19:21:22 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:30.267 19:21:22 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:30.267 19:21:22 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1049959' 00:30:30.267 killing process with pid 1049959 00:30:30.267 19:21:22 keyring_linux -- common/autotest_common.sh@969 -- # kill 1049959 00:30:30.267 19:21:22 keyring_linux -- common/autotest_common.sh@974 -- # wait 1049959 00:30:30.524 00:30:30.524 real 0m6.270s 00:30:30.524 user 0m11.719s 00:30:30.524 sys 0m1.555s 00:30:30.524 19:21:22 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:30.524 19:21:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:30.524 ************************************ 00:30:30.524 END TEST keyring_linux 00:30:30.524 ************************************ 00:30:30.524 19:21:22 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:30.524 19:21:22 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:30:30.524 19:21:22 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 
']' 00:30:30.524 19:21:22 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:30:30.524 19:21:22 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:30:30.524 19:21:22 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:30:30.524 19:21:22 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:30.524 19:21:22 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:30.524 19:21:22 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:30:30.524 19:21:22 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:30.524 19:21:22 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:30:30.524 19:21:22 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:30.524 19:21:22 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:30.524 19:21:22 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:30.524 19:21:22 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:30:30.524 19:21:22 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:30:30.524 19:21:22 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:30:30.524 19:21:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:30.524 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:30:30.524 19:21:22 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:30:30.524 19:21:22 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:30:30.524 19:21:22 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:30:30.524 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:30:32.463 INFO: APP EXITING 00:30:32.463 INFO: killing all VMs 00:30:32.463 INFO: killing vhost app 00:30:32.463 INFO: EXIT DONE 00:30:33.858 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:30:33.858 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:30:33.858 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:30:33.858 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:30:33.858 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:30:33.858 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:30:33.858 0000:00:04.1 (8086 0e21): Already 
using the ioatdma driver 00:30:33.858 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:30:33.858 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:30:33.858 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:30:33.858 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:30:33.858 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:30:33.858 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:30:33.858 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:30:33.858 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:30:33.858 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:30:33.858 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:30:35.231 Cleaning 00:30:35.231 Removing: /var/run/dpdk/spdk0/config 00:30:35.231 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:35.231 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:35.231 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:35.231 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:35.231 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:35.231 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:35.231 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:35.231 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:35.231 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:35.231 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:35.231 Removing: /var/run/dpdk/spdk1/config 00:30:35.231 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:35.231 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:35.231 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:35.231 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:35.231 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:35.231 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:35.231 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 
00:30:35.231 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:35.231 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:35.231 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:35.231 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:35.231 Removing: /var/run/dpdk/spdk2/config 00:30:35.231 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:35.231 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:35.231 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:35.231 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:35.231 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:35.231 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:35.231 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:35.231 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:35.231 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:35.231 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:35.231 Removing: /var/run/dpdk/spdk3/config 00:30:35.231 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:35.231 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:35.231 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:35.231 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:35.231 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:35.231 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:35.231 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:35.231 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:35.231 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:35.231 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:35.231 Removing: /var/run/dpdk/spdk4/config 00:30:35.231 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:35.231 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:35.231 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:35.231 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:35.231 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:35.231 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:35.231 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:35.231 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:35.231 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:35.231 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:35.231 Removing: /dev/shm/bdev_svc_trace.1 00:30:35.231 Removing: /dev/shm/nvmf_trace.0 00:30:35.231 Removing: /dev/shm/spdk_tgt_trace.pid771153 00:30:35.231 Removing: /var/run/dpdk/spdk0 00:30:35.231 Removing: /var/run/dpdk/spdk1 00:30:35.231 Removing: /var/run/dpdk/spdk2 00:30:35.231 Removing: /var/run/dpdk/spdk3 00:30:35.231 Removing: /var/run/dpdk/spdk4 00:30:35.231 Removing: /var/run/dpdk/spdk_pid1007697 00:30:35.231 Removing: /var/run/dpdk/spdk_pid1008112 00:30:35.231 Removing: /var/run/dpdk/spdk_pid1008522 00:30:35.231 Removing: /var/run/dpdk/spdk_pid1008922 00:30:35.231 Removing: /var/run/dpdk/spdk_pid1009508 00:30:35.231 Removing: /var/run/dpdk/spdk_pid1010028 00:30:35.231 Removing: /var/run/dpdk/spdk_pid1010436 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1010974 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1013904 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1014233 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1018864 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1019050 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1020776 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1026243 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1026248 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1029534 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1030942 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1032360 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1033197 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1034726 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1035587 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1041495 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1041889 00:30:35.489 Removing: 
/var/run/dpdk/spdk_pid1042280 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1043937 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1044239 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1044612 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1047333 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1047503 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1049474 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1049959 00:30:35.489 Removing: /var/run/dpdk/spdk_pid1050097 00:30:35.489 Removing: /var/run/dpdk/spdk_pid769594 00:30:35.489 Removing: /var/run/dpdk/spdk_pid770328 00:30:35.489 Removing: /var/run/dpdk/spdk_pid771153 00:30:35.489 Removing: /var/run/dpdk/spdk_pid771575 00:30:35.489 Removing: /var/run/dpdk/spdk_pid772268 00:30:35.489 Removing: /var/run/dpdk/spdk_pid772537 00:30:35.489 Removing: /var/run/dpdk/spdk_pid773365 00:30:35.489 Removing: /var/run/dpdk/spdk_pid773374 00:30:35.489 Removing: /var/run/dpdk/spdk_pid773622 00:30:35.489 Removing: /var/run/dpdk/spdk_pid775443 00:30:35.489 Removing: /var/run/dpdk/spdk_pid776362 00:30:35.489 Removing: /var/run/dpdk/spdk_pid776673 00:30:35.489 Removing: /var/run/dpdk/spdk_pid776860 00:30:35.489 Removing: /var/run/dpdk/spdk_pid777195 00:30:35.489 Removing: /var/run/dpdk/spdk_pid777385 00:30:35.489 Removing: /var/run/dpdk/spdk_pid777540 00:30:35.489 Removing: /var/run/dpdk/spdk_pid777700 00:30:35.489 Removing: /var/run/dpdk/spdk_pid777937 00:30:35.489 Removing: /var/run/dpdk/spdk_pid778322 00:30:35.489 Removing: /var/run/dpdk/spdk_pid780673 00:30:35.489 Removing: /var/run/dpdk/spdk_pid780933 00:30:35.489 Removing: /var/run/dpdk/spdk_pid781130 00:30:35.489 Removing: /var/run/dpdk/spdk_pid781267 00:30:35.489 Removing: /var/run/dpdk/spdk_pid781574 00:30:35.489 Removing: /var/run/dpdk/spdk_pid781701 00:30:35.489 Removing: /var/run/dpdk/spdk_pid782008 00:30:35.489 Removing: /var/run/dpdk/spdk_pid782146 00:30:35.489 Removing: /var/run/dpdk/spdk_pid782440 00:30:35.489 Removing: /var/run/dpdk/spdk_pid782446 00:30:35.489 Removing: 
/var/run/dpdk/spdk_pid782608 00:30:35.489 Removing: /var/run/dpdk/spdk_pid782746 00:30:35.489 Removing: /var/run/dpdk/spdk_pid783130 00:30:35.489 Removing: /var/run/dpdk/spdk_pid783389 00:30:35.489 Removing: /var/run/dpdk/spdk_pid783591 00:30:35.489 Removing: /var/run/dpdk/spdk_pid786086 00:30:35.489 Removing: /var/run/dpdk/spdk_pid788998 00:30:35.489 Removing: /var/run/dpdk/spdk_pid796283 00:30:35.489 Removing: /var/run/dpdk/spdk_pid796689 00:30:35.489 Removing: /var/run/dpdk/spdk_pid799623 00:30:35.489 Removing: /var/run/dpdk/spdk_pid799899 00:30:35.489 Removing: /var/run/dpdk/spdk_pid802828 00:30:35.489 Removing: /var/run/dpdk/spdk_pid807071 00:30:35.489 Removing: /var/run/dpdk/spdk_pid809767 00:30:35.489 Removing: /var/run/dpdk/spdk_pid817011 00:30:35.489 Removing: /var/run/dpdk/spdk_pid822938 00:30:35.489 Removing: /var/run/dpdk/spdk_pid824265 00:30:35.489 Removing: /var/run/dpdk/spdk_pid824939 00:30:35.489 Removing: /var/run/dpdk/spdk_pid836432 00:30:35.489 Removing: /var/run/dpdk/spdk_pid839125 00:30:35.489 Removing: /var/run/dpdk/spdk_pid868678 00:30:35.489 Removing: /var/run/dpdk/spdk_pid872258 00:30:35.489 Removing: /var/run/dpdk/spdk_pid876633 00:30:35.489 Removing: /var/run/dpdk/spdk_pid881017 00:30:35.489 Removing: /var/run/dpdk/spdk_pid881031 00:30:35.489 Removing: /var/run/dpdk/spdk_pid881571 00:30:35.489 Removing: /var/run/dpdk/spdk_pid882220 00:30:35.489 Removing: /var/run/dpdk/spdk_pid882879 00:30:35.489 Removing: /var/run/dpdk/spdk_pid883280 00:30:35.489 Removing: /var/run/dpdk/spdk_pid883286 00:30:35.489 Removing: /var/run/dpdk/spdk_pid883434 00:30:35.489 Removing: /var/run/dpdk/spdk_pid883555 00:30:35.489 Removing: /var/run/dpdk/spdk_pid883572 00:30:35.489 Removing: /var/run/dpdk/spdk_pid884220 00:30:35.489 Removing: /var/run/dpdk/spdk_pid884990 00:30:35.489 Removing: /var/run/dpdk/spdk_pid885777 00:30:35.489 Removing: /var/run/dpdk/spdk_pid886435 00:30:35.489 Removing: /var/run/dpdk/spdk_pid886553 00:30:35.489 Removing: 
/var/run/dpdk/spdk_pid886698 00:30:35.489 Removing: /var/run/dpdk/spdk_pid887719 00:30:35.489 Removing: /var/run/dpdk/spdk_pid888445 00:30:35.489 Removing: /var/run/dpdk/spdk_pid894199 00:30:35.489 Removing: /var/run/dpdk/spdk_pid920081 00:30:35.489 Removing: /var/run/dpdk/spdk_pid923288 00:30:35.489 Removing: /var/run/dpdk/spdk_pid924470 00:30:35.489 Removing: /var/run/dpdk/spdk_pid925779 00:30:35.489 Removing: /var/run/dpdk/spdk_pid925920 00:30:35.489 Removing: /var/run/dpdk/spdk_pid925963 00:30:35.489 Removing: /var/run/dpdk/spdk_pid926072 00:30:35.489 Removing: /var/run/dpdk/spdk_pid926504 00:30:35.489 Removing: /var/run/dpdk/spdk_pid927826 00:30:35.489 Removing: /var/run/dpdk/spdk_pid928684 00:30:35.489 Removing: /var/run/dpdk/spdk_pid929112 00:30:35.489 Removing: /var/run/dpdk/spdk_pid930778 00:30:35.489 Removing: /var/run/dpdk/spdk_pid931359 00:30:35.489 Removing: /var/run/dpdk/spdk_pid931886 00:30:35.489 Removing: /var/run/dpdk/spdk_pid934694 00:30:35.489 Removing: /var/run/dpdk/spdk_pid942172 00:30:35.489 Removing: /var/run/dpdk/spdk_pid944945 00:30:35.747 Removing: /var/run/dpdk/spdk_pid949014 00:30:35.747 Removing: /var/run/dpdk/spdk_pid950081 00:30:35.747 Removing: /var/run/dpdk/spdk_pid951193 00:30:35.747 Removing: /var/run/dpdk/spdk_pid954190 00:30:35.747 Removing: /var/run/dpdk/spdk_pid956964 00:30:35.747 Removing: /var/run/dpdk/spdk_pid961885 00:30:35.747 Removing: /var/run/dpdk/spdk_pid961888 00:30:35.747 Removing: /var/run/dpdk/spdk_pid965080 00:30:35.747 Removing: /var/run/dpdk/spdk_pid965331 00:30:35.747 Removing: /var/run/dpdk/spdk_pid965469 00:30:35.747 Removing: /var/run/dpdk/spdk_pid965740 00:30:35.747 Removing: /var/run/dpdk/spdk_pid965757 00:30:35.747 Removing: /var/run/dpdk/spdk_pid968923 00:30:35.747 Removing: /var/run/dpdk/spdk_pid969259 00:30:35.747 Removing: /var/run/dpdk/spdk_pid972330 00:30:35.747 Removing: /var/run/dpdk/spdk_pid974314 00:30:35.747 Removing: /var/run/dpdk/spdk_pid978265 00:30:35.747 Removing: 
/var/run/dpdk/spdk_pid982511 00:30:35.747 Removing: /var/run/dpdk/spdk_pid989150 00:30:35.747 Removing: /var/run/dpdk/spdk_pid994146 00:30:35.747 Removing: /var/run/dpdk/spdk_pid994148 00:30:35.747 Clean 00:30:35.747 19:21:28 -- common/autotest_common.sh@1451 -- # return 0 00:30:35.747 19:21:28 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:30:35.747 19:21:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:35.747 19:21:28 -- common/autotest_common.sh@10 -- # set +x 00:30:35.747 19:21:28 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:30:35.747 19:21:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:35.747 19:21:28 -- common/autotest_common.sh@10 -- # set +x 00:30:35.747 19:21:28 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:35.747 19:21:28 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:30:35.747 19:21:28 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:30:35.747 19:21:28 -- spdk/autotest.sh@395 -- # hash lcov 00:30:35.747 19:21:28 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:35.747 19:21:28 -- spdk/autotest.sh@397 -- # hostname 00:30:35.747 19:21:28 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:30:36.005 geninfo: WARNING: invalid characters removed from testname! 
00:31:08.088 19:21:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:08.088 19:22:00 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:10.612 19:22:03 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:13.892 19:22:05 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:16.419 19:22:08 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:19.701 19:22:11 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:22.233 19:22:14 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:31:22.233 19:22:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:22.233 19:22:14 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:31:22.233 19:22:14 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:22.233 19:22:14 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:22.233 19:22:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:22.233 19:22:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:22.233 19:22:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:22.233 19:22:14 -- paths/export.sh@5 -- $ export PATH
00:31:22.233 19:22:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:22.233 19:22:14 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:31:22.233 19:22:14 -- common/autobuild_common.sh@447 -- $ date +%s
00:31:22.233 19:22:14 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721928134.XXXXXX
00:31:22.233 19:22:14 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721928134.52Znw5
00:31:22.233 19:22:14 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:31:22.233 19:22:14 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:31:22.233 19:22:14 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:31:22.233 19:22:14 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:31:22.233 19:22:14 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:31:22.233 19:22:14 -- common/autobuild_common.sh@463 -- $ get_config_params
00:31:22.233 19:22:14 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:31:22.233 19:22:14 -- common/autotest_common.sh@10 -- $ set +x
00:31:22.233 19:22:14 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:31:22.233 19:22:14 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:31:22.233 19:22:14 -- pm/common@17 -- $ local monitor
00:31:22.233 19:22:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:22.233 19:22:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:22.233 19:22:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:22.233 19:22:14 -- pm/common@21 -- $ date +%s
00:31:22.233 19:22:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:22.233 19:22:14 -- pm/common@21 -- $ date +%s
00:31:22.233 19:22:14 -- pm/common@25 -- $ sleep 1
00:31:22.233 19:22:14 -- pm/common@21 -- $ date +%s
00:31:22.233 19:22:14 -- pm/common@21 -- $ date +%s
00:31:22.233 19:22:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721928134
00:31:22.233 19:22:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721928134
00:31:22.233 19:22:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721928134
00:31:22.233 19:22:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721928134
00:31:22.233 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721928134_collect-vmstat.pm.log
00:31:22.233 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721928134_collect-cpu-load.pm.log
00:31:22.233 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721928134_collect-cpu-temp.pm.log
00:31:22.233 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721928134_collect-bmc-pm.bmc.pm.log
00:31:23.168 19:22:15 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:31:23.168 19:22:15 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:31:23.168 19:22:15 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:23.168 19:22:15 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:23.168 19:22:15 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:23.168 19:22:15 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:23.168 19:22:15 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:23.168 19:22:15 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:23.168 19:22:15 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:23.168 19:22:15 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:23.168 19:22:15 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:23.168 19:22:15 -- pm/common@29 -- $ signal_monitor_resources TERM
00:31:23.168 19:22:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:31:23.168 19:22:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:23.168 19:22:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:31:23.168 19:22:15 -- pm/common@44 -- $ pid=1060165
00:31:23.168 19:22:15 -- pm/common@50 -- $ kill -TERM 1060165
00:31:23.168 19:22:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:23.168 19:22:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:31:23.168 19:22:15 -- pm/common@44 -- $ pid=1060167
00:31:23.168 19:22:15 -- pm/common@50 -- $ kill -TERM 1060167
00:31:23.168 19:22:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:23.168 19:22:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:31:23.168 19:22:15 -- pm/common@44 -- $ pid=1060169
00:31:23.168 19:22:15 -- pm/common@50 -- $ kill -TERM 1060169
00:31:23.168 19:22:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:23.168 19:22:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:31:23.168 19:22:15 -- pm/common@44 -- $ pid=1060196
00:31:23.168 19:22:15 -- pm/common@50 -- $ sudo -E kill -TERM 1060196
00:31:23.168 + [[ -n 682858 ]]
00:31:23.168 + sudo kill 682858
00:31:23.436 [Pipeline] }
00:31:23.455 [Pipeline] // stage
00:31:23.461 [Pipeline] }
00:31:23.481 [Pipeline] // timeout
00:31:23.487 [Pipeline] }
00:31:23.504 [Pipeline] // catchError
00:31:23.510 [Pipeline] }
00:31:23.529 [Pipeline] // wrap
00:31:23.536 [Pipeline] }
00:31:23.553 [Pipeline] // catchError
00:31:23.563 [Pipeline] stage
00:31:23.566 [Pipeline] { (Epilogue)
00:31:23.581 [Pipeline] catchError
00:31:23.583 [Pipeline] {
00:31:23.599 [Pipeline] echo
00:31:23.601 Cleanup processes
00:31:23.607 [Pipeline] sh
00:31:23.914 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:23.914 1060298 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:31:23.914 1060434 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:23.928 [Pipeline] sh
00:31:24.210 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:24.210 ++ grep -v 'sudo pgrep'
00:31:24.210 ++ awk '{print $1}'
00:31:24.210 + sudo kill -9 1060298
00:31:24.222 [Pipeline] sh
00:31:24.504 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:34.485 [Pipeline] sh
00:31:34.769 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:34.769 Artifacts sizes are good
00:31:34.784 [Pipeline] archiveArtifacts
00:31:34.791 Archiving artifacts
00:31:35.043 [Pipeline] sh
00:31:35.325 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:31:35.339 [Pipeline] cleanWs
00:31:35.349 [WS-CLEANUP] Deleting project workspace...
00:31:35.350 [WS-CLEANUP] Deferred wipeout is used...
00:31:35.357 [WS-CLEANUP] done
00:31:35.359 [Pipeline] }
00:31:35.380 [Pipeline] // catchError
00:31:35.392 [Pipeline] sh
00:31:35.671 + logger -p user.info -t JENKINS-CI
00:31:35.679 [Pipeline] }
00:31:35.696 [Pipeline] // stage
00:31:35.701 [Pipeline] }
00:31:35.717 [Pipeline] // node
00:31:35.723 [Pipeline] End of Pipeline
00:31:35.758 Finished: SUCCESS